00:00:00.001 Started by upstream project "autotest-per-patch" build number 132407
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.138 The recommended git tool is: git
00:00:00.139 using credential 00000000-0000-0000-0000-000000000002
00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.189 Fetching changes from the remote Git repository
00:00:00.192 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.232 Using shallow fetch with depth 1
00:00:00.232 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.232 > git --version # timeout=10
00:00:00.269 > git --version # 'git version 2.39.2'
00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.293 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.293 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.717 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.728 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.738 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.738 > git config core.sparsecheckout # timeout=10
00:00:04.748 > git read-tree -mu HEAD # timeout=10
00:00:04.764 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.788 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.789 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.890 [Pipeline] Start of Pipeline
00:00:04.903 [Pipeline] library
00:00:04.905 Loading library shm_lib@master
00:00:04.905 Library shm_lib@master is cached. Copying from home.
00:00:04.923 [Pipeline] node
00:00:04.932 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.934 [Pipeline] {
00:00:04.944 [Pipeline] catchError
00:00:04.946 [Pipeline] {
00:00:04.958 [Pipeline] wrap
00:00:04.967 [Pipeline] {
00:00:04.975 [Pipeline] stage
00:00:04.977 [Pipeline] { (Prologue)
00:00:05.203 [Pipeline] sh
00:00:05.490 + logger -p user.info -t JENKINS-CI
00:00:05.509 [Pipeline] echo
00:00:05.511 Node: CYP9
00:00:05.519 [Pipeline] sh
00:00:05.825 [Pipeline] setCustomBuildProperty
00:00:05.837 [Pipeline] echo
00:00:05.839 Cleanup processes
00:00:05.846 [Pipeline] sh
00:00:06.135 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.135 953014 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.149 [Pipeline] sh
00:00:06.437 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.437 ++ grep -v 'sudo pgrep'
00:00:06.437 ++ awk '{print $1}'
00:00:06.437 + sudo kill -9
00:00:06.437 + true
00:00:06.453 [Pipeline] cleanWs
00:00:06.463 [WS-CLEANUP] Deleting project workspace...
00:00:06.463 [WS-CLEANUP] Deferred wipeout is used...
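The "Cleanup processes" step above guards against SPDK processes left over from a previous run on this node: pgrep -af lists matching processes, grep -v drops the pgrep invocation itself, awk extracts the PIDs, and the trailing "+ true" keeps the step green when kill -9 runs with no arguments (as happens here, where the only match was the pgrep itself). A minimal standalone sketch of the same pattern, assuming the workspace path from this log:

  # Kill any SPDK processes left over from a previous run; succeed even when none exist.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  [ -n "$pids" ] && sudo kill -9 $pids || true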
00:00:06.470 [WS-CLEANUP] done 00:00:06.474 [Pipeline] setCustomBuildProperty 00:00:06.490 [Pipeline] sh 00:00:06.779 + sudo git config --global --replace-all safe.directory '*' 00:00:06.862 [Pipeline] httpRequest 00:00:07.528 [Pipeline] echo 00:00:07.529 Sorcerer 10.211.164.20 is alive 00:00:07.536 [Pipeline] retry 00:00:07.538 [Pipeline] { 00:00:07.551 [Pipeline] httpRequest 00:00:07.556 HttpMethod: GET 00:00:07.556 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.557 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.564 Response Code: HTTP/1.1 200 OK 00:00:07.564 Success: Status code 200 is in the accepted range: 200,404 00:00:07.565 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.675 [Pipeline] } 00:00:11.693 [Pipeline] // retry 00:00:11.701 [Pipeline] sh 00:00:11.994 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.013 [Pipeline] httpRequest 00:00:12.424 [Pipeline] echo 00:00:12.425 Sorcerer 10.211.164.20 is alive 00:00:12.433 [Pipeline] retry 00:00:12.436 [Pipeline] { 00:00:12.449 [Pipeline] httpRequest 00:00:12.453 HttpMethod: GET 00:00:12.453 URL: http://10.211.164.20/packages/spdk_d3dfde8728124ac5e88c53569415fa16f9f8b850.tar.gz 00:00:12.454 Sending request to url: http://10.211.164.20/packages/spdk_d3dfde8728124ac5e88c53569415fa16f9f8b850.tar.gz 00:00:12.467 Response Code: HTTP/1.1 200 OK 00:00:12.467 Success: Status code 200 is in the accepted range: 200,404 00:00:12.468 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d3dfde8728124ac5e88c53569415fa16f9f8b850.tar.gz 00:02:07.536 [Pipeline] } 00:02:07.554 [Pipeline] // retry 00:02:07.562 [Pipeline] sh 00:02:07.857 + tar --no-same-owner -xf spdk_d3dfde8728124ac5e88c53569415fa16f9f8b850.tar.gz 00:02:11.178 [Pipeline] sh 00:02:11.467 + git -C spdk log --oneline -n5 00:02:11.467 d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:02:11.467 b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:02:11.467 3bdf5e874 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:02:11.467 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:02:11.467 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:02:11.480 [Pipeline] } 00:02:11.493 [Pipeline] // stage 00:02:11.501 [Pipeline] stage 00:02:11.504 [Pipeline] { (Prepare) 00:02:11.519 [Pipeline] writeFile 00:02:11.535 [Pipeline] sh 00:02:11.823 + logger -p user.info -t JENKINS-CI 00:02:11.838 [Pipeline] sh 00:02:12.128 + logger -p user.info -t JENKINS-CI 00:02:12.144 [Pipeline] sh 00:02:12.432 + cat autorun-spdk.conf 00:02:12.432 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.432 SPDK_TEST_NVMF=1 00:02:12.432 SPDK_TEST_NVME_CLI=1 00:02:12.432 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:12.432 SPDK_TEST_NVMF_NICS=e810 00:02:12.432 SPDK_TEST_VFIOUSER=1 00:02:12.432 SPDK_RUN_UBSAN=1 00:02:12.432 NET_TYPE=phy 00:02:12.441 RUN_NIGHTLY=0 00:02:12.445 [Pipeline] readFile 00:02:12.468 [Pipeline] withEnv 00:02:12.470 [Pipeline] { 00:02:12.482 [Pipeline] sh 00:02:12.771 + set -ex 00:02:12.771 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:12.771 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:12.771 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.771 ++ SPDK_TEST_NVMF=1 00:02:12.771 ++ 
SPDK_TEST_NVME_CLI=1 00:02:12.771 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:12.771 ++ SPDK_TEST_NVMF_NICS=e810 00:02:12.771 ++ SPDK_TEST_VFIOUSER=1 00:02:12.771 ++ SPDK_RUN_UBSAN=1 00:02:12.771 ++ NET_TYPE=phy 00:02:12.771 ++ RUN_NIGHTLY=0 00:02:12.771 + case $SPDK_TEST_NVMF_NICS in 00:02:12.771 + DRIVERS=ice 00:02:12.771 + [[ tcp == \r\d\m\a ]] 00:02:12.771 + [[ -n ice ]] 00:02:12.771 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:12.771 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:12.771 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:12.771 rmmod: ERROR: Module irdma is not currently loaded 00:02:12.771 rmmod: ERROR: Module i40iw is not currently loaded 00:02:12.771 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:12.771 + true 00:02:12.771 + for D in $DRIVERS 00:02:12.771 + sudo modprobe ice 00:02:12.771 + exit 0 00:02:12.781 [Pipeline] } 00:02:12.796 [Pipeline] // withEnv 00:02:12.802 [Pipeline] } 00:02:12.815 [Pipeline] // stage 00:02:12.825 [Pipeline] catchError 00:02:12.827 [Pipeline] { 00:02:12.840 [Pipeline] timeout 00:02:12.841 Timeout set to expire in 1 hr 0 min 00:02:12.842 [Pipeline] { 00:02:12.856 [Pipeline] stage 00:02:12.858 [Pipeline] { (Tests) 00:02:12.872 [Pipeline] sh 00:02:13.162 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:13.162 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:13.162 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:13.162 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:13.162 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.162 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:13.162 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:13.162 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:13.162 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:13.162 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:13.162 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:13.162 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:13.162 + source /etc/os-release
00:02:13.162 ++ NAME='Fedora Linux'
00:02:13.162 ++ VERSION='39 (Cloud Edition)'
00:02:13.162 ++ ID=fedora
00:02:13.162 ++ VERSION_ID=39
00:02:13.162 ++ VERSION_CODENAME=
00:02:13.162 ++ PLATFORM_ID=platform:f39
00:02:13.162 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:13.162 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:13.162 ++ LOGO=fedora-logo-icon
00:02:13.162 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:13.162 ++ HOME_URL=https://fedoraproject.org/
00:02:13.162 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:13.162 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:13.162 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:13.162 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:13.162 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:13.162 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:13.162 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:13.162 ++ SUPPORT_END=2024-11-12
00:02:13.162 ++ VARIANT='Cloud Edition'
00:02:13.162 ++ VARIANT_ID=cloud
00:02:13.162 + uname -a
00:02:13.162 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:13.162 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:16.466 Hugepages
00:02:16.466 node hugesize free / total
00:02:16.466 node0 1048576kB 0 / 0
00:02:16.466 node0 2048kB 0 / 0
00:02:16.466 node1 1048576kB 0 / 0
00:02:16.466 node1 2048kB 0 / 0
00:02:16.466
00:02:16.466 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:16.466 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:16.466 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:16.466 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:16.466 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:16.466 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:16.466 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:16.466 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:16.466 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:16.466 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:16.466 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:16.466 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:16.466 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:16.466 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:16.466 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:16.466 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:16.466 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:16.466 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:16.466 + rm -f /tmp/spdk-ld-path
00:02:16.466 + source autorun-spdk.conf
00:02:16.466 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.466 ++ SPDK_TEST_NVMF=1
00:02:16.466 ++ SPDK_TEST_NVME_CLI=1
00:02:16.466 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:16.466 ++ SPDK_TEST_NVMF_NICS=e810
00:02:16.466 ++ SPDK_TEST_VFIOUSER=1
00:02:16.466 ++ SPDK_RUN_UBSAN=1
00:02:16.466 ++ NET_TYPE=phy
00:02:16.466 ++ RUN_NIGHTLY=0
00:02:16.466 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:16.466 + [[ -n '' ]]
00:02:16.466 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:16.466 + for M in /var/spdk/build-*-manifest.txt
00:02:16.466 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:02:16.466 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:16.466 + for M in /var/spdk/build-*-manifest.txt 00:02:16.466 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:16.466 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:16.466 + for M in /var/spdk/build-*-manifest.txt 00:02:16.466 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:16.467 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:16.467 ++ uname 00:02:16.467 + [[ Linux == \L\i\n\u\x ]] 00:02:16.467 + sudo dmesg -T 00:02:16.467 + sudo dmesg --clear 00:02:16.467 + dmesg_pid=953991 00:02:16.467 + [[ Fedora Linux == FreeBSD ]] 00:02:16.467 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.467 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:16.467 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:16.467 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:16.467 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:16.467 + [[ -x /usr/src/fio-static/fio ]] 00:02:16.467 + export FIO_BIN=/usr/src/fio-static/fio 00:02:16.467 + sudo dmesg -Tw 00:02:16.467 + FIO_BIN=/usr/src/fio-static/fio 00:02:16.467 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:16.467 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:16.467 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:16.467 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.467 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:16.467 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:16.467 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.467 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:16.467 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:16.467 15:56:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:16.467 15:56:52 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:16.467 15:56:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:16.467 15:56:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:16.467 15:56:52 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:16.729 15:56:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:16.729 15:56:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:16.729 15:56:52 -- 
scripts/common.sh@15 -- $ shopt -s extglob 00:02:16.729 15:56:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:16.729 15:56:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.729 15:56:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.729 15:56:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.729 15:56:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.729 15:56:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.729 15:56:52 -- paths/export.sh@5 -- $ export PATH 00:02:16.729 15:56:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.729 15:56:52 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:16.729 15:56:52 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:16.729 15:56:52 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732114612.XXXXXX 00:02:16.729 15:56:52 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732114612.mafGCT 00:02:16.729 15:56:52 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:16.729 15:56:52 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:16.729 15:56:52 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:16.729 15:56:52 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:16.729 15:56:52 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:16.729 15:56:52 -- 
common/autobuild_common.sh@509 -- $ get_config_params 00:02:16.729 15:56:52 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:16.729 15:56:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.729 15:56:52 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:16.729 15:56:52 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:16.729 15:56:52 -- pm/common@17 -- $ local monitor 00:02:16.729 15:56:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.729 15:56:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.729 15:56:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.729 15:56:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.729 15:56:52 -- pm/common@21 -- $ date +%s 00:02:16.729 15:56:52 -- pm/common@21 -- $ date +%s 00:02:16.729 15:56:52 -- pm/common@25 -- $ sleep 1 00:02:16.729 15:56:52 -- pm/common@21 -- $ date +%s 00:02:16.729 15:56:52 -- pm/common@21 -- $ date +%s 00:02:16.729 15:56:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114612 00:02:16.729 15:56:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114612 00:02:16.729 15:56:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114612 00:02:16.729 15:56:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732114612 00:02:16.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114612_collect-vmstat.pm.log 00:02:16.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114612_collect-cpu-load.pm.log 00:02:16.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114612_collect-cpu-temp.pm.log 00:02:16.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732114612_collect-bmc-pm.bmc.pm.log 00:02:17.672 15:56:53 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:17.672 15:56:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:17.672 15:56:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:17.672 15:56:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:17.672 15:56:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:17.672 Wed Nov 20 02:56:53 PM UTC 2024 00:02:17.672 15:56:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:17.672 v25.01-pre-222-gd3dfde872 00:02:17.672 15:56:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:17.672 15:56:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:17.672 15:56:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:17.672 15:56:53 -- common/autotest_common.sh@1105 -- $ 
'[' 3 -le 1 ']'
00:02:17.672 15:56:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:17.672 15:56:53 -- common/autotest_common.sh@10 -- $ set +x
00:02:17.672 ************************************
00:02:17.672 START TEST ubsan
00:02:17.672 ************************************
00:02:17.673 15:56:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:17.673 using ubsan
00:02:17.673
00:02:17.673 real 0m0.001s
00:02:17.673 user 0m0.000s
00:02:17.673 sys 0m0.000s
00:02:17.673 15:56:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:17.673 15:56:53 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:17.673 ************************************
00:02:17.673 END TEST ubsan
00:02:17.673 ************************************
00:02:17.933 15:56:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:17.933 15:56:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:17.933 15:56:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:17.933 15:56:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:17.933 15:56:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:17.933 15:56:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:17.933 15:56:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:17.933 15:56:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:17.933 15:56:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:17.933 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:17.933 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:18.504 Using 'verbs' RDMA provider
00:02:34.495 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:46.728 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:47.300 Creating mk/config.mk...done.
00:02:47.300 Creating mk/cc.flags.mk...done.
00:02:47.300 Type 'make' to build.
00:02:47.300 15:57:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:47.300 15:57:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:47.300 15:57:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:47.300 15:57:22 -- common/autotest_common.sh@10 -- $ set +x
00:02:47.300 ************************************
00:02:47.300 START TEST make
00:02:47.300 ************************************
00:02:47.300 15:57:23 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:47.562 make[1]: Nothing to be done for 'all'.
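Each "START TEST ... / END TEST ..." banner pair above is printed by SPDK's run_test helper from common/autotest_common.sh, which wraps an arbitrary command with banners and per-test timing; that is where the real/user/sys lines under "using ubsan" come from, and the same wrapper drives "run_test make make -j144". A rough, simplified sketch of that behavior (not the actual SPDK implementation):

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"          # the wrapped test command; bash's time keyword emits real/user/sys
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }

  run_test ubsan echo 'using ubsan'   # prints the banners, 'using ubsan', then the timing block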
00:02:49.489 The Meson build system
00:02:49.489 Version: 1.5.0
00:02:49.489 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:49.489 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:49.489 Build type: native build
00:02:49.489 Project name: libvfio-user
00:02:49.489 Project version: 0.0.1
00:02:49.489 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:49.489 C linker for the host machine: cc ld.bfd 2.40-14
00:02:49.489 Host machine cpu family: x86_64
00:02:49.489 Host machine cpu: x86_64
00:02:49.489 Run-time dependency threads found: YES
00:02:49.489 Library dl found: YES
00:02:49.489 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:49.489 Run-time dependency json-c found: YES 0.17
00:02:49.489 Run-time dependency cmocka found: YES 1.1.7
00:02:49.489 Program pytest-3 found: NO
00:02:49.489 Program flake8 found: NO
00:02:49.489 Program misspell-fixer found: NO
00:02:49.489 Program restructuredtext-lint found: NO
00:02:49.489 Program valgrind found: YES (/usr/bin/valgrind)
00:02:49.489 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:49.489 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:49.489 Compiler for C supports arguments -Wwrite-strings: YES
00:02:49.489 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:49.489 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:49.489 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:49.489 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
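The configure step above is a stock out-of-tree Meson build of the bundled libvfio-user: source directory spdk/libvfio-user, build directory spdk/build/libvfio-user/build-debug, with the buildtype, default_library, and libdir choices echoed in the "User defined options" summary just below. A by-hand equivalent of that configure, with paths taken from this log (SPDK's actual invocation may pass additional options):

  meson setup \
      --buildtype debug \
      --default-library shared \
      --libdir /usr/local/lib \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user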
00:02:49.489 Build targets in project: 8
00:02:49.489 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:49.489 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:49.489
00:02:49.489 libvfio-user 0.0.1
00:02:49.489
00:02:49.489 User defined options
00:02:49.489 buildtype : debug
00:02:49.489 default_library: shared
00:02:49.489 libdir : /usr/local/lib
00:02:49.489
00:02:49.489 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:49.489 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:49.750 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:49.750 [2/37] Compiling C object samples/null.p/null.c.o
00:02:49.750 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:49.750 [4/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:49.750 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:49.750 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:49.750 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:49.750 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:49.750 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:49.750 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:49.750 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:49.750 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:49.750 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:49.750 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:49.750 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:49.750 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:49.750 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:49.750 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:49.750 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:49.750 [20/37] Compiling C object samples/server.p/server.c.o
00:02:49.750 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:49.750 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:49.750 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:49.750 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:49.750 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:49.751 [26/37] Compiling C object samples/client.p/client.c.o
00:02:49.751 [27/37] Linking target samples/client
00:02:49.751 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:49.751 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:50.012 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:50.012 [31/37] Linking target test/unit_tests
00:02:50.012 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:50.012 [33/37] Linking target samples/lspci
00:02:50.012 [34/37] Linking target samples/null
00:02:50.012 [35/37] Linking target samples/server
00:02:50.013 [36/37] Linking target samples/shadow_ioeventfd_server
00:02:50.013 [37/37] Linking target samples/gpio-pci-idio-16
00:02:50.013 INFO: autodetecting backend as ninja
00:02:50.013 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
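The DESTDIR=... meson install command on the next line stages the build into the SPDK tree rather than the live root: with DESTDIR set, every install path is prefixed by it, so the configured libdir /usr/local/lib actually lands under .../spdk/build/libvfio-user/usr/local/lib. The same staging pattern in isolation (the /tmp path here is illustrative):

  # Build, then install under a scratch root instead of /.
  ninja -C build-debug
  DESTDIR=/tmp/libvfio-user-stage meson install --quiet -C build-debug
  find /tmp/libvfio-user-stage -name 'libvfio-user.so*'   # staged under the scratch prefix, not system-installed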
00:02:50.275 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:50.537 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:50.537 ninja: no work to do. 00:02:57.131 The Meson build system 00:02:57.131 Version: 1.5.0 00:02:57.131 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:57.131 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:57.131 Build type: native build 00:02:57.131 Program cat found: YES (/usr/bin/cat) 00:02:57.131 Project name: DPDK 00:02:57.131 Project version: 24.03.0 00:02:57.131 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:57.131 C linker for the host machine: cc ld.bfd 2.40-14 00:02:57.131 Host machine cpu family: x86_64 00:02:57.131 Host machine cpu: x86_64 00:02:57.131 Message: ## Building in Developer Mode ## 00:02:57.131 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:57.131 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:57.131 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:57.131 Program python3 found: YES (/usr/bin/python3) 00:02:57.131 Program cat found: YES (/usr/bin/cat) 00:02:57.131 Compiler for C supports arguments -march=native: YES 00:02:57.131 Checking for size of "void *" : 8 00:02:57.131 Checking for size of "void *" : 8 (cached) 00:02:57.131 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:57.131 Library m found: YES 00:02:57.131 Library numa found: YES 00:02:57.131 Has header "numaif.h" : YES 00:02:57.131 Library fdt found: NO 00:02:57.131 Library execinfo found: NO 00:02:57.131 Has header "execinfo.h" : YES 00:02:57.131 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:57.131 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:57.131 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:57.131 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:57.131 Run-time dependency openssl found: YES 3.1.1 00:02:57.131 Run-time dependency libpcap found: YES 1.10.4 00:02:57.131 Has header "pcap.h" with dependency libpcap: YES 00:02:57.131 Compiler for C supports arguments -Wcast-qual: YES 00:02:57.131 Compiler for C supports arguments -Wdeprecated: YES 00:02:57.131 Compiler for C supports arguments -Wformat: YES 00:02:57.131 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:57.131 Compiler for C supports arguments -Wformat-security: NO 00:02:57.131 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:57.131 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:57.131 Compiler for C supports arguments -Wnested-externs: YES 00:02:57.131 Compiler for C supports arguments -Wold-style-definition: YES 00:02:57.131 Compiler for C supports arguments -Wpointer-arith: YES 00:02:57.131 Compiler for C supports arguments -Wsign-compare: YES 00:02:57.131 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:57.131 Compiler for C supports arguments -Wundef: YES 00:02:57.131 Compiler for C supports arguments -Wwrite-strings: YES 00:02:57.131 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:57.131 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:57.131 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:57.131 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:57.131 Program objdump found: YES (/usr/bin/objdump) 00:02:57.131 Compiler for C supports arguments -mavx512f: YES 00:02:57.131 Checking if "AVX512 checking" compiles: YES 00:02:57.131 Fetching value of define "__SSE4_2__" : 1 00:02:57.131 Fetching value of define "__AES__" : 1 00:02:57.131 Fetching value of define "__AVX__" : 1 00:02:57.131 Fetching value of define "__AVX2__" : 1 00:02:57.131 Fetching value of define "__AVX512BW__" : 1 00:02:57.131 Fetching value of define "__AVX512CD__" : 1 00:02:57.131 Fetching value of define "__AVX512DQ__" : 1 00:02:57.131 Fetching value of define "__AVX512F__" : 1 00:02:57.131 Fetching value of define "__AVX512VL__" : 1 00:02:57.131 Fetching value of define "__PCLMUL__" : 1 00:02:57.131 Fetching value of define "__RDRND__" : 1 00:02:57.131 Fetching value of define "__RDSEED__" : 1 00:02:57.131 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:57.131 Fetching value of define "__znver1__" : (undefined) 00:02:57.131 Fetching value of define "__znver2__" : (undefined) 00:02:57.131 Fetching value of define "__znver3__" : (undefined) 00:02:57.131 Fetching value of define "__znver4__" : (undefined) 00:02:57.131 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:57.131 Message: lib/log: Defining dependency "log" 00:02:57.131 Message: lib/kvargs: Defining dependency "kvargs" 00:02:57.131 Message: lib/telemetry: Defining dependency "telemetry" 00:02:57.131 Checking for function "getentropy" : NO 00:02:57.131 Message: lib/eal: Defining dependency "eal" 00:02:57.131 Message: lib/ring: Defining dependency "ring" 00:02:57.131 Message: lib/rcu: Defining dependency "rcu" 00:02:57.131 Message: lib/mempool: Defining dependency "mempool" 00:02:57.131 Message: lib/mbuf: Defining dependency "mbuf" 00:02:57.131 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:57.131 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:57.131 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:57.131 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:57.131 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:57.131 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:57.131 Compiler for C supports arguments -mpclmul: YES 00:02:57.131 Compiler for C supports arguments -maes: YES 00:02:57.131 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:57.131 Compiler for C supports arguments -mavx512bw: YES 00:02:57.131 Compiler for C supports arguments -mavx512dq: YES 00:02:57.131 Compiler for C supports arguments -mavx512vl: YES 00:02:57.131 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:57.131 Compiler for C supports arguments -mavx2: YES 00:02:57.131 Compiler for C supports arguments -mavx: YES 00:02:57.131 Message: lib/net: Defining dependency "net" 00:02:57.131 Message: lib/meter: Defining dependency "meter" 00:02:57.131 Message: lib/ethdev: Defining dependency "ethdev" 00:02:57.131 Message: lib/pci: Defining dependency "pci" 00:02:57.131 Message: lib/cmdline: Defining dependency "cmdline" 00:02:57.131 Message: lib/hash: Defining dependency "hash" 00:02:57.131 Message: lib/timer: Defining dependency "timer" 00:02:57.131 Message: lib/compressdev: Defining dependency "compressdev" 00:02:57.131 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:57.131 Message: lib/dmadev: Defining dependency "dmadev" 
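Every "Compiler for C supports arguments ..." and "Fetching value of define ..." line in this DPDK configure output is a small compile-time probe: Meson test-compiles a snippet with the candidate flag, or queries the preprocessor for a predefined macro, and records YES/NO (or the macro value) for the build. A shell approximation of the two probe kinds, assuming cc is gcc or clang:

  # Flag probe (mirrors 'Compiler for C supports arguments -mavx512f: YES'):
  echo 'int main(void){return 0;}' | cc -mavx512f -Werror -x c -o /dev/null - \
      && echo '-mavx512f: YES' || echo '-mavx512f: NO'

  # Define probe (mirrors 'Fetching value of define "__AVX512F__" : 1'):
  cc -march=native -dM -E - </dev/null | grep -w __AVX512F__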
00:02:57.131 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:57.131 Message: lib/power: Defining dependency "power" 00:02:57.131 Message: lib/reorder: Defining dependency "reorder" 00:02:57.131 Message: lib/security: Defining dependency "security" 00:02:57.131 Has header "linux/userfaultfd.h" : YES 00:02:57.131 Has header "linux/vduse.h" : YES 00:02:57.131 Message: lib/vhost: Defining dependency "vhost" 00:02:57.131 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:57.131 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:57.131 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:57.131 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:57.131 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:57.131 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:57.131 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:57.131 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:57.131 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:57.131 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:57.131 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:57.131 Configuring doxy-api-html.conf using configuration 00:02:57.131 Configuring doxy-api-man.conf using configuration 00:02:57.131 Program mandb found: YES (/usr/bin/mandb) 00:02:57.131 Program sphinx-build found: NO 00:02:57.131 Configuring rte_build_config.h using configuration 00:02:57.131 Message: 00:02:57.131 ================= 00:02:57.131 Applications Enabled 00:02:57.131 ================= 00:02:57.131 00:02:57.132 apps: 00:02:57.132 00:02:57.132 00:02:57.132 Message: 00:02:57.132 ================= 00:02:57.132 Libraries Enabled 00:02:57.132 ================= 00:02:57.132 00:02:57.132 libs: 00:02:57.132 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:57.132 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:57.132 cryptodev, dmadev, power, reorder, security, vhost, 00:02:57.132 00:02:57.132 Message: 00:02:57.132 =============== 00:02:57.132 Drivers Enabled 00:02:57.132 =============== 00:02:57.132 00:02:57.132 common: 00:02:57.132 00:02:57.132 bus: 00:02:57.132 pci, vdev, 00:02:57.132 mempool: 00:02:57.132 ring, 00:02:57.132 dma: 00:02:57.132 00:02:57.132 net: 00:02:57.132 00:02:57.132 crypto: 00:02:57.132 00:02:57.132 compress: 00:02:57.132 00:02:57.132 vdpa: 00:02:57.132 00:02:57.132 00:02:57.132 Message: 00:02:57.132 ================= 00:02:57.132 Content Skipped 00:02:57.132 ================= 00:02:57.132 00:02:57.132 apps: 00:02:57.132 dumpcap: explicitly disabled via build config 00:02:57.132 graph: explicitly disabled via build config 00:02:57.132 pdump: explicitly disabled via build config 00:02:57.132 proc-info: explicitly disabled via build config 00:02:57.132 test-acl: explicitly disabled via build config 00:02:57.132 test-bbdev: explicitly disabled via build config 00:02:57.132 test-cmdline: explicitly disabled via build config 00:02:57.132 test-compress-perf: explicitly disabled via build config 00:02:57.132 test-crypto-perf: explicitly disabled via build config 00:02:57.132 test-dma-perf: explicitly disabled via build config 00:02:57.132 test-eventdev: explicitly disabled via build config 00:02:57.132 test-fib: explicitly disabled via build config 00:02:57.132 test-flow-perf: explicitly disabled via build config 00:02:57.132 test-gpudev: explicitly disabled 
via build config 00:02:57.132 test-mldev: explicitly disabled via build config 00:02:57.132 test-pipeline: explicitly disabled via build config 00:02:57.132 test-pmd: explicitly disabled via build config 00:02:57.132 test-regex: explicitly disabled via build config 00:02:57.132 test-sad: explicitly disabled via build config 00:02:57.132 test-security-perf: explicitly disabled via build config 00:02:57.132 00:02:57.132 libs: 00:02:57.132 argparse: explicitly disabled via build config 00:02:57.132 metrics: explicitly disabled via build config 00:02:57.132 acl: explicitly disabled via build config 00:02:57.132 bbdev: explicitly disabled via build config 00:02:57.132 bitratestats: explicitly disabled via build config 00:02:57.132 bpf: explicitly disabled via build config 00:02:57.132 cfgfile: explicitly disabled via build config 00:02:57.132 distributor: explicitly disabled via build config 00:02:57.132 efd: explicitly disabled via build config 00:02:57.132 eventdev: explicitly disabled via build config 00:02:57.132 dispatcher: explicitly disabled via build config 00:02:57.132 gpudev: explicitly disabled via build config 00:02:57.132 gro: explicitly disabled via build config 00:02:57.132 gso: explicitly disabled via build config 00:02:57.132 ip_frag: explicitly disabled via build config 00:02:57.132 jobstats: explicitly disabled via build config 00:02:57.132 latencystats: explicitly disabled via build config 00:02:57.132 lpm: explicitly disabled via build config 00:02:57.132 member: explicitly disabled via build config 00:02:57.132 pcapng: explicitly disabled via build config 00:02:57.132 rawdev: explicitly disabled via build config 00:02:57.132 regexdev: explicitly disabled via build config 00:02:57.132 mldev: explicitly disabled via build config 00:02:57.132 rib: explicitly disabled via build config 00:02:57.132 sched: explicitly disabled via build config 00:02:57.132 stack: explicitly disabled via build config 00:02:57.132 ipsec: explicitly disabled via build config 00:02:57.132 pdcp: explicitly disabled via build config 00:02:57.132 fib: explicitly disabled via build config 00:02:57.132 port: explicitly disabled via build config 00:02:57.132 pdump: explicitly disabled via build config 00:02:57.132 table: explicitly disabled via build config 00:02:57.132 pipeline: explicitly disabled via build config 00:02:57.132 graph: explicitly disabled via build config 00:02:57.132 node: explicitly disabled via build config 00:02:57.132 00:02:57.132 drivers: 00:02:57.132 common/cpt: not in enabled drivers build config 00:02:57.132 common/dpaax: not in enabled drivers build config 00:02:57.132 common/iavf: not in enabled drivers build config 00:02:57.132 common/idpf: not in enabled drivers build config 00:02:57.132 common/ionic: not in enabled drivers build config 00:02:57.132 common/mvep: not in enabled drivers build config 00:02:57.132 common/octeontx: not in enabled drivers build config 00:02:57.132 bus/auxiliary: not in enabled drivers build config 00:02:57.132 bus/cdx: not in enabled drivers build config 00:02:57.132 bus/dpaa: not in enabled drivers build config 00:02:57.132 bus/fslmc: not in enabled drivers build config 00:02:57.132 bus/ifpga: not in enabled drivers build config 00:02:57.132 bus/platform: not in enabled drivers build config 00:02:57.132 bus/uacce: not in enabled drivers build config 00:02:57.132 bus/vmbus: not in enabled drivers build config 00:02:57.132 common/cnxk: not in enabled drivers build config 00:02:57.132 common/mlx5: not in enabled drivers build config 00:02:57.132 
common/nfp: not in enabled drivers build config 00:02:57.132 common/nitrox: not in enabled drivers build config 00:02:57.132 common/qat: not in enabled drivers build config 00:02:57.132 common/sfc_efx: not in enabled drivers build config 00:02:57.132 mempool/bucket: not in enabled drivers build config 00:02:57.132 mempool/cnxk: not in enabled drivers build config 00:02:57.132 mempool/dpaa: not in enabled drivers build config 00:02:57.132 mempool/dpaa2: not in enabled drivers build config 00:02:57.132 mempool/octeontx: not in enabled drivers build config 00:02:57.132 mempool/stack: not in enabled drivers build config 00:02:57.132 dma/cnxk: not in enabled drivers build config 00:02:57.132 dma/dpaa: not in enabled drivers build config 00:02:57.132 dma/dpaa2: not in enabled drivers build config 00:02:57.132 dma/hisilicon: not in enabled drivers build config 00:02:57.132 dma/idxd: not in enabled drivers build config 00:02:57.132 dma/ioat: not in enabled drivers build config 00:02:57.132 dma/skeleton: not in enabled drivers build config 00:02:57.132 net/af_packet: not in enabled drivers build config 00:02:57.132 net/af_xdp: not in enabled drivers build config 00:02:57.132 net/ark: not in enabled drivers build config 00:02:57.132 net/atlantic: not in enabled drivers build config 00:02:57.132 net/avp: not in enabled drivers build config 00:02:57.132 net/axgbe: not in enabled drivers build config 00:02:57.132 net/bnx2x: not in enabled drivers build config 00:02:57.132 net/bnxt: not in enabled drivers build config 00:02:57.132 net/bonding: not in enabled drivers build config 00:02:57.132 net/cnxk: not in enabled drivers build config 00:02:57.132 net/cpfl: not in enabled drivers build config 00:02:57.132 net/cxgbe: not in enabled drivers build config 00:02:57.132 net/dpaa: not in enabled drivers build config 00:02:57.132 net/dpaa2: not in enabled drivers build config 00:02:57.132 net/e1000: not in enabled drivers build config 00:02:57.132 net/ena: not in enabled drivers build config 00:02:57.132 net/enetc: not in enabled drivers build config 00:02:57.132 net/enetfec: not in enabled drivers build config 00:02:57.132 net/enic: not in enabled drivers build config 00:02:57.132 net/failsafe: not in enabled drivers build config 00:02:57.132 net/fm10k: not in enabled drivers build config 00:02:57.132 net/gve: not in enabled drivers build config 00:02:57.132 net/hinic: not in enabled drivers build config 00:02:57.132 net/hns3: not in enabled drivers build config 00:02:57.132 net/i40e: not in enabled drivers build config 00:02:57.132 net/iavf: not in enabled drivers build config 00:02:57.132 net/ice: not in enabled drivers build config 00:02:57.132 net/idpf: not in enabled drivers build config 00:02:57.132 net/igc: not in enabled drivers build config 00:02:57.132 net/ionic: not in enabled drivers build config 00:02:57.132 net/ipn3ke: not in enabled drivers build config 00:02:57.132 net/ixgbe: not in enabled drivers build config 00:02:57.132 net/mana: not in enabled drivers build config 00:02:57.132 net/memif: not in enabled drivers build config 00:02:57.132 net/mlx4: not in enabled drivers build config 00:02:57.132 net/mlx5: not in enabled drivers build config 00:02:57.132 net/mvneta: not in enabled drivers build config 00:02:57.132 net/mvpp2: not in enabled drivers build config 00:02:57.132 net/netvsc: not in enabled drivers build config 00:02:57.132 net/nfb: not in enabled drivers build config 00:02:57.132 net/nfp: not in enabled drivers build config 00:02:57.132 net/ngbe: not in enabled drivers build 
config 00:02:57.132 net/null: not in enabled drivers build config 00:02:57.132 net/octeontx: not in enabled drivers build config 00:02:57.132 net/octeon_ep: not in enabled drivers build config 00:02:57.132 net/pcap: not in enabled drivers build config 00:02:57.132 net/pfe: not in enabled drivers build config 00:02:57.132 net/qede: not in enabled drivers build config 00:02:57.132 net/ring: not in enabled drivers build config 00:02:57.132 net/sfc: not in enabled drivers build config 00:02:57.132 net/softnic: not in enabled drivers build config 00:02:57.132 net/tap: not in enabled drivers build config 00:02:57.132 net/thunderx: not in enabled drivers build config 00:02:57.132 net/txgbe: not in enabled drivers build config 00:02:57.132 net/vdev_netvsc: not in enabled drivers build config 00:02:57.132 net/vhost: not in enabled drivers build config 00:02:57.132 net/virtio: not in enabled drivers build config 00:02:57.132 net/vmxnet3: not in enabled drivers build config 00:02:57.132 raw/*: missing internal dependency, "rawdev" 00:02:57.132 crypto/armv8: not in enabled drivers build config 00:02:57.132 crypto/bcmfs: not in enabled drivers build config 00:02:57.132 crypto/caam_jr: not in enabled drivers build config 00:02:57.132 crypto/ccp: not in enabled drivers build config 00:02:57.132 crypto/cnxk: not in enabled drivers build config 00:02:57.132 crypto/dpaa_sec: not in enabled drivers build config 00:02:57.132 crypto/dpaa2_sec: not in enabled drivers build config 00:02:57.133 crypto/ipsec_mb: not in enabled drivers build config 00:02:57.133 crypto/mlx5: not in enabled drivers build config 00:02:57.133 crypto/mvsam: not in enabled drivers build config 00:02:57.133 crypto/nitrox: not in enabled drivers build config 00:02:57.133 crypto/null: not in enabled drivers build config 00:02:57.133 crypto/octeontx: not in enabled drivers build config 00:02:57.133 crypto/openssl: not in enabled drivers build config 00:02:57.133 crypto/scheduler: not in enabled drivers build config 00:02:57.133 crypto/uadk: not in enabled drivers build config 00:02:57.133 crypto/virtio: not in enabled drivers build config 00:02:57.133 compress/isal: not in enabled drivers build config 00:02:57.133 compress/mlx5: not in enabled drivers build config 00:02:57.133 compress/nitrox: not in enabled drivers build config 00:02:57.133 compress/octeontx: not in enabled drivers build config 00:02:57.133 compress/zlib: not in enabled drivers build config 00:02:57.133 regex/*: missing internal dependency, "regexdev" 00:02:57.133 ml/*: missing internal dependency, "mldev" 00:02:57.133 vdpa/ifc: not in enabled drivers build config 00:02:57.133 vdpa/mlx5: not in enabled drivers build config 00:02:57.133 vdpa/nfp: not in enabled drivers build config 00:02:57.133 vdpa/sfc: not in enabled drivers build config 00:02:57.133 event/*: missing internal dependency, "eventdev" 00:02:57.133 baseband/*: missing internal dependency, "bbdev" 00:02:57.133 gpu/*: missing internal dependency, "gpudev" 00:02:57.133 00:02:57.133 00:02:57.133 Build targets in project: 84 00:02:57.133 00:02:57.133 DPDK 24.03.0 00:02:57.133 00:02:57.133 User defined options 00:02:57.133 buildtype : debug 00:02:57.133 default_library : shared 00:02:57.133 libdir : lib 00:02:57.133 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:57.133 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:57.133 c_link_args : 00:02:57.133 cpu_instruction_set: native 00:02:57.133 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:57.133 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:57.133 enable_docs : false 00:02:57.133 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:57.133 enable_kmods : false 00:02:57.133 max_lcores : 128 00:02:57.133 tests : false 00:02:57.133 00:02:57.133 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.133 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:57.133 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:57.133 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.133 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:57.133 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:57.133 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.133 [6/267] Linking static target lib/librte_kvargs.a 00:02:57.133 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:57.133 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.133 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.133 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:57.133 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:57.133 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:57.133 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:57.133 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:57.133 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.133 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:57.133 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:57.133 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:57.133 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:57.133 [20/267] Linking static target lib/librte_log.a 00:02:57.133 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:57.133 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.133 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.133 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:57.133 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.133 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:57.133 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:57.133 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:57.133 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.133 [30/267] Linking static target 
lib/librte_pci.a 00:02:57.392 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:57.392 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:57.392 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:57.392 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:57.392 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:57.392 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.392 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.392 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.392 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.653 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.653 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.653 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:57.653 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:57.653 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:57.653 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:57.653 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:57.653 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:57.653 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:57.653 [49/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:57.653 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:57.653 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:57.653 [52/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:57.653 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:57.653 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:57.653 [55/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.653 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:57.653 [57/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:57.653 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:57.653 [59/267] Linking static target lib/librte_meter.a 00:02:57.653 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.653 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:57.653 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:57.653 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:57.653 [64/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:57.653 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:57.653 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:57.653 [67/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.653 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:57.653 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:57.653 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.653 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:57.653 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:57.653 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.653 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:57.653 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:57.653 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:57.653 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.653 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:57.653 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:57.653 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:57.653 [81/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.653 [82/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.653 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:57.653 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:57.653 [85/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:57.653 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:57.653 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.653 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:57.653 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:57.653 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:57.654 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:57.654 [92/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.654 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.654 [94/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:57.654 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:57.654 [96/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:57.654 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:57.654 [98/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.654 [99/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:57.654 [100/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:57.654 [101/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:57.654 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.654 [103/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:57.654 [104/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:57.654 [105/267] Linking static target lib/librte_telemetry.a 00:02:57.654 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.654 [107/267] Linking static target lib/librte_ring.a 00:02:57.654 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:57.654 [109/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.654 [110/267] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:57.654 [111/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.654 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:57.654 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:57.654 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:57.654 [115/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:57.654 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:57.654 [117/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:57.654 [118/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.654 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.654 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:57.654 [121/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:57.654 [122/267] Linking static target lib/librte_timer.a 00:02:57.654 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:57.654 [124/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.654 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.654 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:57.654 [127/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.654 [128/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.654 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.654 [130/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:57.654 [131/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:57.654 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.654 [133/267] Linking static target lib/librte_mempool.a 00:02:57.654 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.654 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:57.654 [136/267] Linking static target lib/librte_cmdline.a 00:02:57.654 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.654 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:57.654 [139/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:57.654 [140/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:57.654 [141/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:57.654 [142/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.654 [143/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.654 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:57.654 [145/267] Linking static target lib/librte_dmadev.a 00:02:57.654 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.654 [147/267] Linking static target lib/librte_rcu.a 00:02:57.654 [148/267] Linking static target lib/librte_compressdev.a 00:02:57.654 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:57.654 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:57.654 [151/267] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:57.654 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.916 [153/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.916 [154/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.916 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:57.916 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.916 [157/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.916 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.916 [159/267] Linking static target lib/librte_net.a 00:02:57.917 [160/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.917 [161/267] Linking target lib/librte_log.so.24.1 00:02:57.917 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:57.917 [163/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.917 [164/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.917 [165/267] Linking static target lib/librte_reorder.a 00:02:57.917 [166/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.917 [167/267] Linking static target lib/librte_security.a 00:02:57.917 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.917 [169/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.917 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:57.917 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:57.917 [172/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:57.917 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:57.917 [174/267] Linking static target lib/librte_power.a 00:02:57.917 [175/267] Linking static target lib/librte_eal.a 00:02:57.917 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.917 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.917 [178/267] Linking static target lib/librte_mbuf.a 00:02:57.917 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.917 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.917 [181/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.917 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.917 [183/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.917 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:57.917 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.917 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.917 [187/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.917 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.917 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.917 [190/267] Linking static target drivers/librte_bus_vdev.a 00:02:57.917 [191/267] Linking target lib/librte_kvargs.so.24.1 00:02:57.917 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:57.917 [193/267] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.917 [194/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.917 [195/267] Linking static target lib/librte_hash.a 00:02:58.178 [196/267] Linking static target drivers/librte_bus_pci.a 00:02:58.178 [197/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.178 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.178 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.178 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.178 [201/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.178 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:58.178 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.178 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.178 [205/267] Linking static target drivers/librte_mempool_ring.a 00:02:58.178 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:58.178 [207/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.178 [208/267] Linking static target lib/librte_cryptodev.a 00:02:58.178 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.440 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.440 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.440 [212/267] Linking target lib/librte_telemetry.so.24.1 00:02:58.440 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.440 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:58.440 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.440 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.440 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.701 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.701 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.701 [220/267] Linking static target lib/librte_ethdev.a 00:02:58.701 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.963 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.963 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.963 [224/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.223 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.223 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.797 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.797 [228/267] Linking static target lib/librte_vhost.a 00:03:00.369 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.285 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.869 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.439 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.439 [233/267] Linking target lib/librte_eal.so.24.1 00:03:09.700 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:09.700 [235/267] Linking target lib/librte_ring.so.24.1 00:03:09.700 [236/267] Linking target lib/librte_timer.so.24.1 00:03:09.700 [237/267] Linking target lib/librte_meter.so.24.1 00:03:09.700 [238/267] Linking target lib/librte_dmadev.so.24.1 00:03:09.700 [239/267] Linking target lib/librte_pci.so.24.1 00:03:09.700 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:09.960 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:09.960 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:09.960 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:09.960 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:09.960 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:09.960 [246/267] Linking target lib/librte_mempool.so.24.1 00:03:09.960 [247/267] Linking target lib/librte_rcu.so.24.1 00:03:09.960 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:09.960 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:09.960 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:09.960 [251/267] Linking target lib/librte_mbuf.so.24.1 00:03:09.960 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:10.221 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:10.221 [254/267] Linking target lib/librte_reorder.so.24.1 00:03:10.221 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:10.221 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:03:10.221 [257/267] Linking target lib/librte_net.so.24.1 00:03:10.481 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:10.481 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:10.481 [260/267] Linking target lib/librte_security.so.24.1 00:03:10.481 [261/267] Linking target lib/librte_cmdline.so.24.1 00:03:10.481 [262/267] Linking target lib/librte_hash.so.24.1 00:03:10.481 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:10.481 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:10.481 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:10.742 [266/267] Linking target lib/librte_power.so.24.1 00:03:10.742 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:10.742 INFO: autodetecting backend as ninja 00:03:10.742 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:14.048 CC lib/log/log.o 00:03:14.048 CC lib/ut/ut.o 00:03:14.048 CC lib/ut_mock/mock.o 00:03:14.048 CC lib/log/log_flags.o 00:03:14.309 CC lib/log/log_deprecated.o 00:03:14.309 LIB 
libspdk_ut_mock.a 00:03:14.309 LIB libspdk_ut.a 00:03:14.309 LIB libspdk_log.a 00:03:14.310 SO libspdk_ut_mock.so.6.0 00:03:14.310 SO libspdk_ut.so.2.0 00:03:14.310 SO libspdk_log.so.7.1 00:03:14.570 SYMLINK libspdk_ut_mock.so 00:03:14.570 SYMLINK libspdk_ut.so 00:03:14.570 SYMLINK libspdk_log.so 00:03:14.832 CC lib/util/base64.o 00:03:14.832 CC lib/dma/dma.o 00:03:14.832 CC lib/util/bit_array.o 00:03:14.832 CXX lib/trace_parser/trace.o 00:03:14.832 CC lib/util/cpuset.o 00:03:14.832 CC lib/util/crc16.o 00:03:14.832 CC lib/ioat/ioat.o 00:03:14.832 CC lib/util/crc32.o 00:03:14.832 CC lib/util/crc32c.o 00:03:14.832 CC lib/util/crc32_ieee.o 00:03:14.832 CC lib/util/crc64.o 00:03:14.832 CC lib/util/dif.o 00:03:14.832 CC lib/util/fd.o 00:03:14.832 CC lib/util/fd_group.o 00:03:14.832 CC lib/util/file.o 00:03:14.832 CC lib/util/hexlify.o 00:03:14.832 CC lib/util/iov.o 00:03:14.832 CC lib/util/math.o 00:03:14.832 CC lib/util/net.o 00:03:14.832 CC lib/util/pipe.o 00:03:14.832 CC lib/util/strerror_tls.o 00:03:14.832 CC lib/util/string.o 00:03:14.832 CC lib/util/uuid.o 00:03:14.832 CC lib/util/xor.o 00:03:14.832 CC lib/util/zipf.o 00:03:14.832 CC lib/util/md5.o 00:03:15.094 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.094 CC lib/vfio_user/host/vfio_user.o 00:03:15.094 LIB libspdk_ioat.a 00:03:15.094 LIB libspdk_dma.a 00:03:15.094 SO libspdk_ioat.so.7.0 00:03:15.094 SO libspdk_dma.so.5.0 00:03:15.094 SYMLINK libspdk_ioat.so 00:03:15.094 SYMLINK libspdk_dma.so 00:03:15.354 LIB libspdk_vfio_user.a 00:03:15.354 SO libspdk_vfio_user.so.5.0 00:03:15.354 LIB libspdk_util.a 00:03:15.354 SYMLINK libspdk_vfio_user.so 00:03:15.614 SO libspdk_util.so.10.1 00:03:15.615 SYMLINK libspdk_util.so 00:03:15.615 LIB libspdk_trace_parser.a 00:03:15.876 SO libspdk_trace_parser.so.6.0 00:03:15.876 SYMLINK libspdk_trace_parser.so 00:03:15.876 CC lib/conf/conf.o 00:03:15.876 CC lib/json/json_parse.o 00:03:15.876 CC lib/json/json_util.o 00:03:15.876 CC lib/json/json_write.o 00:03:15.876 CC lib/vmd/vmd.o 00:03:15.876 CC lib/vmd/led.o 00:03:15.876 CC lib/rdma_utils/rdma_utils.o 00:03:15.876 CC lib/idxd/idxd.o 00:03:15.876 CC lib/env_dpdk/env.o 00:03:15.876 CC lib/idxd/idxd_user.o 00:03:15.876 CC lib/env_dpdk/memory.o 00:03:15.876 CC lib/idxd/idxd_kernel.o 00:03:15.876 CC lib/env_dpdk/pci.o 00:03:16.137 CC lib/env_dpdk/init.o 00:03:16.137 CC lib/env_dpdk/threads.o 00:03:16.137 CC lib/env_dpdk/pci_ioat.o 00:03:16.137 CC lib/env_dpdk/pci_virtio.o 00:03:16.137 CC lib/env_dpdk/pci_vmd.o 00:03:16.137 CC lib/env_dpdk/pci_idxd.o 00:03:16.137 CC lib/env_dpdk/pci_event.o 00:03:16.137 CC lib/env_dpdk/sigbus_handler.o 00:03:16.137 CC lib/env_dpdk/pci_dpdk.o 00:03:16.137 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.137 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.137 LIB libspdk_conf.a 00:03:16.398 SO libspdk_conf.so.6.0 00:03:16.398 LIB libspdk_json.a 00:03:16.398 LIB libspdk_rdma_utils.a 00:03:16.398 SYMLINK libspdk_conf.so 00:03:16.398 SO libspdk_json.so.6.0 00:03:16.398 SO libspdk_rdma_utils.so.1.0 00:03:16.398 SYMLINK libspdk_json.so 00:03:16.398 SYMLINK libspdk_rdma_utils.so 00:03:16.661 LIB libspdk_idxd.a 00:03:16.661 LIB libspdk_vmd.a 00:03:16.661 SO libspdk_idxd.so.12.1 00:03:16.661 SO libspdk_vmd.so.6.0 00:03:16.661 SYMLINK libspdk_idxd.so 00:03:16.661 SYMLINK libspdk_vmd.so 00:03:16.924 CC lib/rdma_provider/common.o 00:03:16.924 CC lib/jsonrpc/jsonrpc_server.o 00:03:16.924 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:16.924 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:16.924 CC lib/jsonrpc/jsonrpc_client.o 00:03:16.924 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.924 LIB libspdk_rdma_provider.a 00:03:17.184 LIB libspdk_jsonrpc.a 00:03:17.184 SO libspdk_rdma_provider.so.7.0 00:03:17.184 SO libspdk_jsonrpc.so.6.0 00:03:17.184 SYMLINK libspdk_rdma_provider.so 00:03:17.184 SYMLINK libspdk_jsonrpc.so 00:03:17.184 LIB libspdk_env_dpdk.a 00:03:17.445 SO libspdk_env_dpdk.so.15.1 00:03:17.445 SYMLINK libspdk_env_dpdk.so 00:03:17.445 CC lib/rpc/rpc.o 00:03:17.706 LIB libspdk_rpc.a 00:03:17.706 SO libspdk_rpc.so.6.0 00:03:17.967 SYMLINK libspdk_rpc.so 00:03:18.229 CC lib/keyring/keyring.o 00:03:18.229 CC lib/keyring/keyring_rpc.o 00:03:18.229 CC lib/trace/trace.o 00:03:18.229 CC lib/notify/notify.o 00:03:18.229 CC lib/trace/trace_flags.o 00:03:18.229 CC lib/notify/notify_rpc.o 00:03:18.229 CC lib/trace/trace_rpc.o 00:03:18.490 LIB libspdk_notify.a 00:03:18.490 SO libspdk_notify.so.6.0 00:03:18.490 LIB libspdk_keyring.a 00:03:18.490 LIB libspdk_trace.a 00:03:18.490 SYMLINK libspdk_notify.so 00:03:18.490 SO libspdk_keyring.so.2.0 00:03:18.490 SO libspdk_trace.so.11.0 00:03:18.490 SYMLINK libspdk_keyring.so 00:03:18.490 SYMLINK libspdk_trace.so 00:03:19.062 CC lib/thread/thread.o 00:03:19.062 CC lib/sock/sock.o 00:03:19.062 CC lib/sock/sock_rpc.o 00:03:19.062 CC lib/thread/iobuf.o 00:03:19.322 LIB libspdk_sock.a 00:03:19.322 SO libspdk_sock.so.10.0 00:03:19.322 SYMLINK libspdk_sock.so 00:03:19.894 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.894 CC lib/nvme/nvme_ctrlr.o 00:03:19.894 CC lib/nvme/nvme_fabric.o 00:03:19.894 CC lib/nvme/nvme_ns_cmd.o 00:03:19.894 CC lib/nvme/nvme_ns.o 00:03:19.894 CC lib/nvme/nvme_pcie_common.o 00:03:19.894 CC lib/nvme/nvme_pcie.o 00:03:19.894 CC lib/nvme/nvme_qpair.o 00:03:19.894 CC lib/nvme/nvme.o 00:03:19.894 CC lib/nvme/nvme_quirks.o 00:03:19.894 CC lib/nvme/nvme_transport.o 00:03:19.894 CC lib/nvme/nvme_discovery.o 00:03:19.894 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.894 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.894 CC lib/nvme/nvme_tcp.o 00:03:19.894 CC lib/nvme/nvme_opal.o 00:03:19.894 CC lib/nvme/nvme_io_msg.o 00:03:19.894 CC lib/nvme/nvme_poll_group.o 00:03:19.894 CC lib/nvme/nvme_zns.o 00:03:19.894 CC lib/nvme/nvme_stubs.o 00:03:19.894 CC lib/nvme/nvme_auth.o 00:03:19.894 CC lib/nvme/nvme_cuse.o 00:03:19.894 CC lib/nvme/nvme_vfio_user.o 00:03:19.894 CC lib/nvme/nvme_rdma.o 00:03:20.156 LIB libspdk_thread.a 00:03:20.417 SO libspdk_thread.so.11.0 00:03:20.417 SYMLINK libspdk_thread.so 00:03:20.678 CC lib/vfu_tgt/tgt_endpoint.o 00:03:20.678 CC lib/vfu_tgt/tgt_rpc.o 00:03:20.678 CC lib/blob/blobstore.o 00:03:20.678 CC lib/blob/request.o 00:03:20.678 CC lib/blob/zeroes.o 00:03:20.678 CC lib/blob/blob_bs_dev.o 00:03:20.678 CC lib/virtio/virtio.o 00:03:20.678 CC lib/virtio/virtio_vfio_user.o 00:03:20.678 CC lib/virtio/virtio_vhost_user.o 00:03:20.678 CC lib/init/json_config.o 00:03:20.678 CC lib/virtio/virtio_pci.o 00:03:20.678 CC lib/init/subsystem.o 00:03:20.678 CC lib/accel/accel.o 00:03:20.678 CC lib/accel/accel_rpc.o 00:03:20.678 CC lib/init/subsystem_rpc.o 00:03:20.678 CC lib/init/rpc.o 00:03:20.678 CC lib/accel/accel_sw.o 00:03:20.678 CC lib/fsdev/fsdev.o 00:03:20.678 CC lib/fsdev/fsdev_io.o 00:03:20.678 CC lib/fsdev/fsdev_rpc.o 00:03:20.940 LIB libspdk_init.a 00:03:21.201 SO libspdk_init.so.6.0 00:03:21.201 LIB libspdk_vfu_tgt.a 00:03:21.201 LIB libspdk_virtio.a 00:03:21.201 SO libspdk_vfu_tgt.so.3.0 00:03:21.201 SYMLINK libspdk_init.so 00:03:21.201 SO libspdk_virtio.so.7.0 00:03:21.201 SYMLINK libspdk_vfu_tgt.so 00:03:21.201 SYMLINK libspdk_virtio.so 00:03:21.462 LIB libspdk_fsdev.a 
00:03:21.462 SO libspdk_fsdev.so.2.0 00:03:21.462 SYMLINK libspdk_fsdev.so 00:03:21.462 CC lib/event/app.o 00:03:21.462 CC lib/event/reactor.o 00:03:21.462 CC lib/event/log_rpc.o 00:03:21.462 CC lib/event/app_rpc.o 00:03:21.462 CC lib/event/scheduler_static.o 00:03:21.724 LIB libspdk_accel.a 00:03:21.724 LIB libspdk_nvme.a 00:03:21.724 SO libspdk_accel.so.16.0 00:03:21.984 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:21.984 SYMLINK libspdk_accel.so 00:03:21.984 SO libspdk_nvme.so.15.0 00:03:21.984 LIB libspdk_event.a 00:03:21.984 SO libspdk_event.so.14.0 00:03:21.984 SYMLINK libspdk_event.so 00:03:22.246 SYMLINK libspdk_nvme.so 00:03:22.246 CC lib/bdev/bdev.o 00:03:22.246 CC lib/bdev/bdev_rpc.o 00:03:22.246 CC lib/bdev/bdev_zone.o 00:03:22.246 CC lib/bdev/part.o 00:03:22.246 CC lib/bdev/scsi_nvme.o 00:03:22.507 LIB libspdk_fuse_dispatcher.a 00:03:22.507 SO libspdk_fuse_dispatcher.so.1.0 00:03:22.507 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.451 LIB libspdk_blob.a 00:03:23.451 SO libspdk_blob.so.11.0 00:03:23.710 SYMLINK libspdk_blob.so 00:03:23.972 CC lib/blobfs/blobfs.o 00:03:23.972 CC lib/blobfs/tree.o 00:03:23.972 CC lib/lvol/lvol.o 00:03:24.547 LIB libspdk_bdev.a 00:03:24.806 SO libspdk_bdev.so.17.0 00:03:24.806 LIB libspdk_blobfs.a 00:03:24.806 SO libspdk_blobfs.so.10.0 00:03:24.806 SYMLINK libspdk_bdev.so 00:03:24.806 LIB libspdk_lvol.a 00:03:24.806 SYMLINK libspdk_blobfs.so 00:03:24.806 SO libspdk_lvol.so.10.0 00:03:25.069 SYMLINK libspdk_lvol.so 00:03:25.069 CC lib/nbd/nbd.o 00:03:25.069 CC lib/nbd/nbd_rpc.o 00:03:25.069 CC lib/ftl/ftl_core.o 00:03:25.069 CC lib/ublk/ublk.o 00:03:25.069 CC lib/ftl/ftl_init.o 00:03:25.069 CC lib/scsi/dev.o 00:03:25.069 CC lib/ublk/ublk_rpc.o 00:03:25.069 CC lib/nvmf/ctrlr.o 00:03:25.069 CC lib/ftl/ftl_layout.o 00:03:25.069 CC lib/scsi/lun.o 00:03:25.069 CC lib/nvmf/ctrlr_discovery.o 00:03:25.069 CC lib/ftl/ftl_debug.o 00:03:25.069 CC lib/scsi/port.o 00:03:25.069 CC lib/nvmf/ctrlr_bdev.o 00:03:25.069 CC lib/ftl/ftl_io.o 00:03:25.069 CC lib/scsi/scsi.o 00:03:25.069 CC lib/nvmf/subsystem.o 00:03:25.069 CC lib/ftl/ftl_sb.o 00:03:25.069 CC lib/scsi/scsi_bdev.o 00:03:25.069 CC lib/nvmf/nvmf.o 00:03:25.069 CC lib/ftl/ftl_l2p.o 00:03:25.069 CC lib/scsi/scsi_pr.o 00:03:25.069 CC lib/nvmf/nvmf_rpc.o 00:03:25.069 CC lib/ftl/ftl_l2p_flat.o 00:03:25.069 CC lib/nvmf/transport.o 00:03:25.069 CC lib/scsi/scsi_rpc.o 00:03:25.069 CC lib/nvmf/tcp.o 00:03:25.069 CC lib/ftl/ftl_nv_cache.o 00:03:25.069 CC lib/scsi/task.o 00:03:25.069 CC lib/ftl/ftl_band.o 00:03:25.069 CC lib/nvmf/stubs.o 00:03:25.069 CC lib/nvmf/mdns_server.o 00:03:25.069 CC lib/ftl/ftl_band_ops.o 00:03:25.069 CC lib/nvmf/vfio_user.o 00:03:25.069 CC lib/ftl/ftl_writer.o 00:03:25.069 CC lib/nvmf/rdma.o 00:03:25.069 CC lib/ftl/ftl_rq.o 00:03:25.069 CC lib/nvmf/auth.o 00:03:25.069 CC lib/ftl/ftl_reloc.o 00:03:25.069 CC lib/ftl/ftl_l2p_cache.o 00:03:25.069 CC lib/ftl/ftl_p2l.o 00:03:25.069 CC lib/ftl/ftl_p2l_log.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.069 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.069 CC lib/ftl/utils/ftl_conf.o 00:03:25.069 
CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:25.330 CC lib/ftl/utils/ftl_md.o 00:03:25.330 CC lib/ftl/utils/ftl_mempool.o 00:03:25.330 CC lib/ftl/utils/ftl_bitmap.o 00:03:25.330 CC lib/ftl/utils/ftl_property.o 00:03:25.330 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:25.330 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:25.330 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:25.330 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:25.330 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:25.330 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:25.330 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:25.330 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:25.330 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:25.330 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:25.330 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.330 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:25.330 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:25.330 CC lib/ftl/ftl_trace.o 00:03:25.330 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.330 CC lib/ftl/base/ftl_base_dev.o 00:03:25.902 LIB libspdk_nbd.a 00:03:25.902 SO libspdk_nbd.so.7.0 00:03:25.902 SYMLINK libspdk_nbd.so 00:03:25.902 LIB libspdk_scsi.a 00:03:25.902 SO libspdk_scsi.so.9.0 00:03:26.163 LIB libspdk_ublk.a 00:03:26.163 SYMLINK libspdk_scsi.so 00:03:26.163 SO libspdk_ublk.so.3.0 00:03:26.163 SYMLINK libspdk_ublk.so 00:03:26.423 LIB libspdk_ftl.a 00:03:26.423 CC lib/vhost/vhost.o 00:03:26.423 CC lib/vhost/vhost_scsi.o 00:03:26.423 CC lib/vhost/vhost_rpc.o 00:03:26.423 CC lib/vhost/vhost_blk.o 00:03:26.423 CC lib/vhost/rte_vhost_user.o 00:03:26.423 CC lib/iscsi/conn.o 00:03:26.423 CC lib/iscsi/init_grp.o 00:03:26.423 CC lib/iscsi/iscsi.o 00:03:26.423 CC lib/iscsi/param.o 00:03:26.423 CC lib/iscsi/portal_grp.o 00:03:26.423 CC lib/iscsi/tgt_node.o 00:03:26.423 CC lib/iscsi/iscsi_subsystem.o 00:03:26.423 CC lib/iscsi/iscsi_rpc.o 00:03:26.423 CC lib/iscsi/task.o 00:03:26.685 SO libspdk_ftl.so.9.0 00:03:26.947 SYMLINK libspdk_ftl.so 00:03:27.208 LIB libspdk_nvmf.a 00:03:27.208 SO libspdk_nvmf.so.20.0 00:03:27.469 LIB libspdk_vhost.a 00:03:27.469 SO libspdk_vhost.so.8.0 00:03:27.469 SYMLINK libspdk_nvmf.so 00:03:27.469 SYMLINK libspdk_vhost.so 00:03:27.730 LIB libspdk_iscsi.a 00:03:27.730 SO libspdk_iscsi.so.8.0 00:03:27.990 SYMLINK libspdk_iscsi.so 00:03:28.562 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.562 CC module/vfu_device/vfu_virtio.o 00:03:28.562 CC module/vfu_device/vfu_virtio_blk.o 00:03:28.562 CC module/vfu_device/vfu_virtio_scsi.o 00:03:28.562 CC module/vfu_device/vfu_virtio_rpc.o 00:03:28.562 CC module/vfu_device/vfu_virtio_fs.o 00:03:28.562 LIB libspdk_env_dpdk_rpc.a 00:03:28.562 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.562 CC module/blob/bdev/blob_bdev.o 00:03:28.562 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.562 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.562 CC module/sock/posix/posix.o 00:03:28.562 CC module/accel/iaa/accel_iaa.o 00:03:28.562 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.562 CC module/accel/error/accel_error.o 00:03:28.562 CC module/accel/error/accel_error_rpc.o 00:03:28.562 CC module/fsdev/aio/fsdev_aio.o 00:03:28.562 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:28.562 CC module/fsdev/aio/linux_aio_mgr.o 00:03:28.562 CC module/accel/ioat/accel_ioat.o 00:03:28.562 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.562 CC module/keyring/linux/keyring.o 00:03:28.562 CC module/keyring/file/keyring.o 00:03:28.562 CC module/accel/dsa/accel_dsa.o 00:03:28.562 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.562 CC module/keyring/file/keyring_rpc.o 00:03:28.562 SO libspdk_env_dpdk_rpc.so.6.0 
00:03:28.562 CC module/keyring/linux/keyring_rpc.o 00:03:28.823 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.824 LIB libspdk_scheduler_gscheduler.a 00:03:28.824 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.824 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.824 LIB libspdk_keyring_linux.a 00:03:28.824 LIB libspdk_keyring_file.a 00:03:28.824 LIB libspdk_scheduler_dynamic.a 00:03:28.824 LIB libspdk_accel_iaa.a 00:03:28.824 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.824 LIB libspdk_accel_ioat.a 00:03:28.824 SO libspdk_keyring_linux.so.1.0 00:03:28.824 LIB libspdk_accel_error.a 00:03:28.824 SO libspdk_keyring_file.so.2.0 00:03:28.824 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.824 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.824 SO libspdk_accel_iaa.so.3.0 00:03:28.824 SO libspdk_accel_ioat.so.6.0 00:03:28.824 LIB libspdk_blob_bdev.a 00:03:28.824 SO libspdk_accel_error.so.2.0 00:03:28.824 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.085 LIB libspdk_accel_dsa.a 00:03:29.085 SYMLINK libspdk_keyring_linux.so 00:03:29.085 SYMLINK libspdk_keyring_file.so 00:03:29.085 SO libspdk_blob_bdev.so.11.0 00:03:29.085 SYMLINK libspdk_scheduler_dynamic.so 00:03:29.085 SYMLINK libspdk_accel_iaa.so 00:03:29.085 SYMLINK libspdk_accel_ioat.so 00:03:29.085 SO libspdk_accel_dsa.so.5.0 00:03:29.085 SYMLINK libspdk_accel_error.so 00:03:29.085 SYMLINK libspdk_blob_bdev.so 00:03:29.085 LIB libspdk_vfu_device.a 00:03:29.085 SYMLINK libspdk_accel_dsa.so 00:03:29.085 SO libspdk_vfu_device.so.3.0 00:03:29.085 SYMLINK libspdk_vfu_device.so 00:03:29.346 LIB libspdk_fsdev_aio.a 00:03:29.346 SO libspdk_fsdev_aio.so.1.0 00:03:29.346 LIB libspdk_sock_posix.a 00:03:29.346 SO libspdk_sock_posix.so.6.0 00:03:29.346 SYMLINK libspdk_fsdev_aio.so 00:03:29.607 SYMLINK libspdk_sock_posix.so 00:03:29.607 CC module/bdev/lvol/vbdev_lvol.o 00:03:29.607 CC module/blobfs/bdev/blobfs_bdev.o 00:03:29.607 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.607 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:29.607 CC module/bdev/error/vbdev_error.o 00:03:29.607 CC module/bdev/error/vbdev_error_rpc.o 00:03:29.607 CC module/bdev/gpt/gpt.o 00:03:29.607 CC module/bdev/delay/vbdev_delay.o 00:03:29.607 CC module/bdev/gpt/vbdev_gpt.o 00:03:29.607 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.607 CC module/bdev/split/vbdev_split.o 00:03:29.607 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.607 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.607 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:29.607 CC module/bdev/null/bdev_null.o 00:03:29.607 CC module/bdev/aio/bdev_aio.o 00:03:29.607 CC module/bdev/nvme/bdev_nvme.o 00:03:29.607 CC module/bdev/null/bdev_null_rpc.o 00:03:29.607 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.607 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:29.607 CC module/bdev/malloc/bdev_malloc.o 00:03:29.607 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.607 CC module/bdev/ftl/bdev_ftl.o 00:03:29.607 CC module/bdev/nvme/nvme_rpc.o 00:03:29.607 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.607 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.607 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.607 CC module/bdev/nvme/vbdev_opal.o 00:03:29.607 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.607 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.607 CC module/bdev/iscsi/bdev_iscsi.o 00:03:29.607 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.607 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.607 CC module/bdev/raid/bdev_raid.o 00:03:29.607 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.607 CC 
module/bdev/raid/bdev_raid_rpc.o 00:03:29.607 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.607 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.607 CC module/bdev/raid/raid0.o 00:03:29.607 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.607 CC module/bdev/raid/raid1.o 00:03:29.607 CC module/bdev/raid/concat.o 00:03:29.868 LIB libspdk_blobfs_bdev.a 00:03:29.868 SO libspdk_blobfs_bdev.so.6.0 00:03:29.868 LIB libspdk_bdev_gpt.a 00:03:29.868 LIB libspdk_bdev_null.a 00:03:29.868 LIB libspdk_bdev_split.a 00:03:29.868 LIB libspdk_bdev_error.a 00:03:30.129 SYMLINK libspdk_blobfs_bdev.so 00:03:30.129 SO libspdk_bdev_gpt.so.6.0 00:03:30.129 LIB libspdk_bdev_ftl.a 00:03:30.130 SO libspdk_bdev_null.so.6.0 00:03:30.130 SO libspdk_bdev_split.so.6.0 00:03:30.130 SO libspdk_bdev_error.so.6.0 00:03:30.130 LIB libspdk_bdev_zone_block.a 00:03:30.130 LIB libspdk_bdev_passthru.a 00:03:30.130 SO libspdk_bdev_ftl.so.6.0 00:03:30.130 LIB libspdk_bdev_aio.a 00:03:30.130 LIB libspdk_bdev_delay.a 00:03:30.130 LIB libspdk_bdev_malloc.a 00:03:30.130 SYMLINK libspdk_bdev_gpt.so 00:03:30.130 SYMLINK libspdk_bdev_null.so 00:03:30.130 SO libspdk_bdev_zone_block.so.6.0 00:03:30.130 SO libspdk_bdev_passthru.so.6.0 00:03:30.130 SO libspdk_bdev_aio.so.6.0 00:03:30.130 SYMLINK libspdk_bdev_split.so 00:03:30.130 LIB libspdk_bdev_iscsi.a 00:03:30.130 SO libspdk_bdev_delay.so.6.0 00:03:30.130 SYMLINK libspdk_bdev_error.so 00:03:30.130 SO libspdk_bdev_malloc.so.6.0 00:03:30.130 SYMLINK libspdk_bdev_ftl.so 00:03:30.130 SO libspdk_bdev_iscsi.so.6.0 00:03:30.130 SYMLINK libspdk_bdev_zone_block.so 00:03:30.130 SYMLINK libspdk_bdev_aio.so 00:03:30.130 SYMLINK libspdk_bdev_passthru.so 00:03:30.130 SYMLINK libspdk_bdev_malloc.so 00:03:30.130 SYMLINK libspdk_bdev_delay.so 00:03:30.130 LIB libspdk_bdev_lvol.a 00:03:30.130 LIB libspdk_bdev_virtio.a 00:03:30.130 SYMLINK libspdk_bdev_iscsi.so 00:03:30.130 SO libspdk_bdev_lvol.so.6.0 00:03:30.130 SO libspdk_bdev_virtio.so.6.0 00:03:30.391 SYMLINK libspdk_bdev_lvol.so 00:03:30.391 SYMLINK libspdk_bdev_virtio.so 00:03:30.653 LIB libspdk_bdev_raid.a 00:03:30.653 SO libspdk_bdev_raid.so.6.0 00:03:30.653 SYMLINK libspdk_bdev_raid.so 00:03:32.039 LIB libspdk_bdev_nvme.a 00:03:32.039 SO libspdk_bdev_nvme.so.7.1 00:03:32.039 SYMLINK libspdk_bdev_nvme.so 00:03:32.980 CC module/event/subsystems/vmd/vmd.o 00:03:32.980 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.980 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.980 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.980 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.980 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:32.980 CC module/event/subsystems/sock/sock.o 00:03:32.980 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.980 CC module/event/subsystems/keyring/keyring.o 00:03:32.980 CC module/event/subsystems/fsdev/fsdev.o 00:03:32.980 LIB libspdk_event_scheduler.a 00:03:32.980 LIB libspdk_event_vhost_blk.a 00:03:32.980 LIB libspdk_event_vmd.a 00:03:32.980 LIB libspdk_event_fsdev.a 00:03:32.980 LIB libspdk_event_keyring.a 00:03:32.980 LIB libspdk_event_vfu_tgt.a 00:03:32.980 LIB libspdk_event_sock.a 00:03:32.980 LIB libspdk_event_iobuf.a 00:03:32.980 SO libspdk_event_scheduler.so.4.0 00:03:32.980 SO libspdk_event_vhost_blk.so.3.0 00:03:32.980 SO libspdk_event_fsdev.so.1.0 00:03:32.980 SO libspdk_event_vmd.so.6.0 00:03:33.242 SO libspdk_event_keyring.so.1.0 00:03:33.242 SO libspdk_event_vfu_tgt.so.3.0 00:03:33.242 SO libspdk_event_sock.so.5.0 00:03:33.242 SO libspdk_event_iobuf.so.3.0 00:03:33.242 SYMLINK 
libspdk_event_scheduler.so 00:03:33.242 SYMLINK libspdk_event_vhost_blk.so 00:03:33.242 SYMLINK libspdk_event_fsdev.so 00:03:33.242 SYMLINK libspdk_event_vmd.so 00:03:33.242 SYMLINK libspdk_event_keyring.so 00:03:33.242 SYMLINK libspdk_event_vfu_tgt.so 00:03:33.242 SYMLINK libspdk_event_sock.so 00:03:33.242 SYMLINK libspdk_event_iobuf.so 00:03:33.502 CC module/event/subsystems/accel/accel.o 00:03:33.762 LIB libspdk_event_accel.a 00:03:33.762 SO libspdk_event_accel.so.6.0 00:03:33.762 SYMLINK libspdk_event_accel.so 00:03:34.333 CC module/event/subsystems/bdev/bdev.o 00:03:34.333 LIB libspdk_event_bdev.a 00:03:34.333 SO libspdk_event_bdev.so.6.0 00:03:34.594 SYMLINK libspdk_event_bdev.so 00:03:34.854 CC module/event/subsystems/scsi/scsi.o 00:03:34.854 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.854 CC module/event/subsystems/nbd/nbd.o 00:03:34.854 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.854 CC module/event/subsystems/ublk/ublk.o 00:03:35.114 LIB libspdk_event_nbd.a 00:03:35.114 LIB libspdk_event_scsi.a 00:03:35.114 LIB libspdk_event_ublk.a 00:03:35.114 SO libspdk_event_nbd.so.6.0 00:03:35.114 SO libspdk_event_ublk.so.3.0 00:03:35.114 SO libspdk_event_scsi.so.6.0 00:03:35.114 LIB libspdk_event_nvmf.a 00:03:35.114 SYMLINK libspdk_event_nbd.so 00:03:35.114 SYMLINK libspdk_event_ublk.so 00:03:35.115 SYMLINK libspdk_event_scsi.so 00:03:35.115 SO libspdk_event_nvmf.so.6.0 00:03:35.115 SYMLINK libspdk_event_nvmf.so 00:03:35.375 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.375 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.637 LIB libspdk_event_vhost_scsi.a 00:03:35.637 LIB libspdk_event_iscsi.a 00:03:35.637 SO libspdk_event_vhost_scsi.so.3.0 00:03:35.637 SO libspdk_event_iscsi.so.6.0 00:03:35.898 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.898 SYMLINK libspdk_event_iscsi.so 00:03:35.898 SO libspdk.so.6.0 00:03:35.898 SYMLINK libspdk.so 00:03:36.565 CXX app/trace/trace.o 00:03:36.565 CC app/trace_record/trace_record.o 00:03:36.565 CC test/rpc_client/rpc_client_test.o 00:03:36.565 CC app/spdk_lspci/spdk_lspci.o 00:03:36.565 CC app/spdk_top/spdk_top.o 00:03:36.565 TEST_HEADER include/spdk/accel.h 00:03:36.565 TEST_HEADER include/spdk/accel_module.h 00:03:36.565 TEST_HEADER include/spdk/assert.h 00:03:36.565 TEST_HEADER include/spdk/barrier.h 00:03:36.565 CC app/spdk_nvme_perf/perf.o 00:03:36.565 TEST_HEADER include/spdk/base64.h 00:03:36.565 TEST_HEADER include/spdk/bdev.h 00:03:36.565 TEST_HEADER include/spdk/bdev_module.h 00:03:36.565 CC app/spdk_nvme_identify/identify.o 00:03:36.565 TEST_HEADER include/spdk/bdev_zone.h 00:03:36.565 TEST_HEADER include/spdk/bit_array.h 00:03:36.565 CC app/spdk_nvme_discover/discovery_aer.o 00:03:36.565 TEST_HEADER include/spdk/bit_pool.h 00:03:36.565 TEST_HEADER include/spdk/blob_bdev.h 00:03:36.565 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:36.565 TEST_HEADER include/spdk/blobfs.h 00:03:36.565 TEST_HEADER include/spdk/blob.h 00:03:36.565 TEST_HEADER include/spdk/config.h 00:03:36.565 TEST_HEADER include/spdk/conf.h 00:03:36.565 TEST_HEADER include/spdk/cpuset.h 00:03:36.565 TEST_HEADER include/spdk/crc16.h 00:03:36.565 TEST_HEADER include/spdk/crc32.h 00:03:36.565 TEST_HEADER include/spdk/crc64.h 00:03:36.565 TEST_HEADER include/spdk/dif.h 00:03:36.565 TEST_HEADER include/spdk/dma.h 00:03:36.565 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:36.565 TEST_HEADER include/spdk/endian.h 00:03:36.565 TEST_HEADER include/spdk/env_dpdk.h 00:03:36.565 TEST_HEADER include/spdk/event.h 00:03:36.565 TEST_HEADER include/spdk/env.h 
00:03:36.565 TEST_HEADER include/spdk/fd_group.h 00:03:36.565 CC app/spdk_dd/spdk_dd.o 00:03:36.565 TEST_HEADER include/spdk/file.h 00:03:36.565 TEST_HEADER include/spdk/fd.h 00:03:36.565 TEST_HEADER include/spdk/fsdev.h 00:03:36.565 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:36.565 TEST_HEADER include/spdk/fsdev_module.h 00:03:36.565 TEST_HEADER include/spdk/ftl.h 00:03:36.565 TEST_HEADER include/spdk/gpt_spec.h 00:03:36.565 TEST_HEADER include/spdk/histogram_data.h 00:03:36.565 TEST_HEADER include/spdk/hexlify.h 00:03:36.565 TEST_HEADER include/spdk/idxd_spec.h 00:03:36.565 TEST_HEADER include/spdk/idxd.h 00:03:36.565 TEST_HEADER include/spdk/init.h 00:03:36.565 CC app/iscsi_tgt/iscsi_tgt.o 00:03:36.565 TEST_HEADER include/spdk/ioat.h 00:03:36.565 TEST_HEADER include/spdk/ioat_spec.h 00:03:36.565 TEST_HEADER include/spdk/iscsi_spec.h 00:03:36.565 CC app/nvmf_tgt/nvmf_main.o 00:03:36.565 TEST_HEADER include/spdk/json.h 00:03:36.565 TEST_HEADER include/spdk/jsonrpc.h 00:03:36.565 TEST_HEADER include/spdk/keyring.h 00:03:36.565 TEST_HEADER include/spdk/keyring_module.h 00:03:36.565 TEST_HEADER include/spdk/log.h 00:03:36.565 TEST_HEADER include/spdk/likely.h 00:03:36.565 CC app/spdk_tgt/spdk_tgt.o 00:03:36.565 TEST_HEADER include/spdk/lvol.h 00:03:36.565 TEST_HEADER include/spdk/md5.h 00:03:36.565 TEST_HEADER include/spdk/mmio.h 00:03:36.565 TEST_HEADER include/spdk/memory.h 00:03:36.565 TEST_HEADER include/spdk/net.h 00:03:36.565 TEST_HEADER include/spdk/nbd.h 00:03:36.565 TEST_HEADER include/spdk/nvme.h 00:03:36.565 TEST_HEADER include/spdk/notify.h 00:03:36.565 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:36.565 TEST_HEADER include/spdk/nvme_intel.h 00:03:36.565 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:36.565 TEST_HEADER include/spdk/nvme_spec.h 00:03:36.565 TEST_HEADER include/spdk/nvme_zns.h 00:03:36.565 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:36.565 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:36.565 TEST_HEADER include/spdk/nvmf.h 00:03:36.565 TEST_HEADER include/spdk/nvmf_transport.h 00:03:36.565 TEST_HEADER include/spdk/nvmf_spec.h 00:03:36.565 TEST_HEADER include/spdk/opal.h 00:03:36.565 TEST_HEADER include/spdk/opal_spec.h 00:03:36.565 TEST_HEADER include/spdk/pci_ids.h 00:03:36.565 TEST_HEADER include/spdk/pipe.h 00:03:36.565 TEST_HEADER include/spdk/queue.h 00:03:36.565 TEST_HEADER include/spdk/rpc.h 00:03:36.565 TEST_HEADER include/spdk/reduce.h 00:03:36.565 TEST_HEADER include/spdk/scheduler.h 00:03:36.565 TEST_HEADER include/spdk/scsi.h 00:03:36.565 TEST_HEADER include/spdk/scsi_spec.h 00:03:36.565 TEST_HEADER include/spdk/stdinc.h 00:03:36.565 TEST_HEADER include/spdk/sock.h 00:03:36.565 TEST_HEADER include/spdk/thread.h 00:03:36.565 TEST_HEADER include/spdk/string.h 00:03:36.565 TEST_HEADER include/spdk/trace_parser.h 00:03:36.565 TEST_HEADER include/spdk/trace.h 00:03:36.565 TEST_HEADER include/spdk/tree.h 00:03:36.565 TEST_HEADER include/spdk/util.h 00:03:36.565 TEST_HEADER include/spdk/ublk.h 00:03:36.565 TEST_HEADER include/spdk/uuid.h 00:03:36.565 TEST_HEADER include/spdk/version.h 00:03:36.565 TEST_HEADER include/spdk/vhost.h 00:03:36.565 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:36.565 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:36.565 TEST_HEADER include/spdk/vmd.h 00:03:36.565 TEST_HEADER include/spdk/xor.h 00:03:36.565 CXX test/cpp_headers/accel.o 00:03:36.565 TEST_HEADER include/spdk/zipf.h 00:03:36.565 CXX test/cpp_headers/accel_module.o 00:03:36.565 CXX test/cpp_headers/assert.o 00:03:36.565 CXX test/cpp_headers/bdev.o 00:03:36.565 CXX 
test/cpp_headers/base64.o 00:03:36.565 CXX test/cpp_headers/barrier.o 00:03:36.565 CXX test/cpp_headers/bdev_module.o 00:03:36.565 CXX test/cpp_headers/bdev_zone.o 00:03:36.565 CXX test/cpp_headers/blob_bdev.o 00:03:36.565 CXX test/cpp_headers/bit_array.o 00:03:36.565 CXX test/cpp_headers/bit_pool.o 00:03:36.565 CXX test/cpp_headers/blobfs.o 00:03:36.565 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.565 CXX test/cpp_headers/config.o 00:03:36.565 CXX test/cpp_headers/blob.o 00:03:36.565 CXX test/cpp_headers/conf.o 00:03:36.565 CXX test/cpp_headers/cpuset.o 00:03:36.565 CXX test/cpp_headers/crc16.o 00:03:36.565 CXX test/cpp_headers/crc32.o 00:03:36.565 CXX test/cpp_headers/dma.o 00:03:36.565 CXX test/cpp_headers/crc64.o 00:03:36.565 CXX test/cpp_headers/dif.o 00:03:36.565 CXX test/cpp_headers/endian.o 00:03:36.565 CXX test/cpp_headers/env.o 00:03:36.565 CXX test/cpp_headers/env_dpdk.o 00:03:36.565 CXX test/cpp_headers/event.o 00:03:36.565 CXX test/cpp_headers/fd_group.o 00:03:36.565 CXX test/cpp_headers/fd.o 00:03:36.565 CXX test/cpp_headers/fsdev.o 00:03:36.565 CXX test/cpp_headers/fsdev_module.o 00:03:36.566 CXX test/cpp_headers/file.o 00:03:36.566 CXX test/cpp_headers/ftl.o 00:03:36.566 CXX test/cpp_headers/fuse_dispatcher.o 00:03:36.566 CXX test/cpp_headers/histogram_data.o 00:03:36.566 CXX test/cpp_headers/gpt_spec.o 00:03:36.566 CXX test/cpp_headers/hexlify.o 00:03:36.566 CXX test/cpp_headers/idxd_spec.o 00:03:36.566 CXX test/cpp_headers/idxd.o 00:03:36.566 CXX test/cpp_headers/init.o 00:03:36.566 CXX test/cpp_headers/ioat.o 00:03:36.566 CXX test/cpp_headers/ioat_spec.o 00:03:36.566 CXX test/cpp_headers/iscsi_spec.o 00:03:36.566 CXX test/cpp_headers/jsonrpc.o 00:03:36.566 CXX test/cpp_headers/json.o 00:03:36.566 CXX test/cpp_headers/keyring.o 00:03:36.566 CXX test/cpp_headers/likely.o 00:03:36.566 CXX test/cpp_headers/keyring_module.o 00:03:36.566 CXX test/cpp_headers/lvol.o 00:03:36.566 CXX test/cpp_headers/mmio.o 00:03:36.566 CXX test/cpp_headers/log.o 00:03:36.566 CXX test/cpp_headers/md5.o 00:03:36.566 CXX test/cpp_headers/memory.o 00:03:36.566 CXX test/cpp_headers/net.o 00:03:36.566 CXX test/cpp_headers/nbd.o 00:03:36.566 CXX test/cpp_headers/notify.o 00:03:36.566 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.566 CXX test/cpp_headers/nvme.o 00:03:36.566 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.566 CXX test/cpp_headers/nvme_intel.o 00:03:36.566 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.566 CXX test/cpp_headers/nvme_zns.o 00:03:36.566 CXX test/cpp_headers/nvme_spec.o 00:03:36.566 CXX test/cpp_headers/nvmf_spec.o 00:03:36.566 CXX test/cpp_headers/nvmf.o 00:03:36.566 CXX test/cpp_headers/nvmf_transport.o 00:03:36.566 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.566 CXX test/cpp_headers/pci_ids.o 00:03:36.566 CXX test/cpp_headers/opal.o 00:03:36.566 CXX test/cpp_headers/opal_spec.o 00:03:36.566 CXX test/cpp_headers/pipe.o 00:03:36.566 CXX test/cpp_headers/queue.o 00:03:36.566 CXX test/cpp_headers/reduce.o 00:03:36.566 CC examples/ioat/verify/verify.o 00:03:36.566 CXX test/cpp_headers/rpc.o 00:03:36.566 CXX test/cpp_headers/scheduler.o 00:03:36.566 CC examples/ioat/perf/perf.o 00:03:36.566 CXX test/cpp_headers/scsi.o 00:03:36.566 CC test/env/vtophys/vtophys.o 00:03:36.566 CXX test/cpp_headers/scsi_spec.o 00:03:36.566 CXX test/cpp_headers/sock.o 00:03:36.566 CXX test/cpp_headers/string.o 00:03:36.566 CXX test/cpp_headers/thread.o 00:03:36.566 CXX test/cpp_headers/stdinc.o 00:03:36.566 CXX test/cpp_headers/trace_parser.o 00:03:36.566 CXX test/cpp_headers/trace.o 00:03:36.566 CC 
examples/util/zipf/zipf.o 00:03:36.566 CXX test/cpp_headers/tree.o 00:03:36.566 CXX test/cpp_headers/util.o 00:03:36.566 CC test/thread/poller_perf/poller_perf.o 00:03:36.566 CXX test/cpp_headers/ublk.o 00:03:36.908 CXX test/cpp_headers/uuid.o 00:03:36.908 CXX test/cpp_headers/version.o 00:03:36.908 CXX test/cpp_headers/vhost.o 00:03:36.908 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.908 CXX test/cpp_headers/vmd.o 00:03:36.908 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.908 CXX test/cpp_headers/zipf.o 00:03:36.908 CXX test/cpp_headers/xor.o 00:03:36.908 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.908 CC test/app/stub/stub.o 00:03:36.908 CC test/app/jsoncat/jsoncat.o 00:03:36.908 CC test/app/histogram_perf/histogram_perf.o 00:03:36.908 CC test/env/pci/pci_ut.o 00:03:36.908 CC test/env/memory/memory_ut.o 00:03:36.908 CC test/dma/test_dma/test_dma.o 00:03:36.908 CC app/fio/nvme/fio_plugin.o 00:03:36.908 CC app/fio/bdev/fio_plugin.o 00:03:36.908 CC test/app/bdev_svc/bdev_svc.o 00:03:36.908 LINK rpc_client_test 00:03:36.908 LINK spdk_lspci 00:03:36.908 LINK interrupt_tgt 00:03:37.191 LINK spdk_nvme_discover 00:03:37.191 LINK nvmf_tgt 00:03:37.191 LINK iscsi_tgt 00:03:37.191 LINK spdk_trace_record 00:03:37.191 LINK spdk_tgt 00:03:37.191 CC test/env/mem_callbacks/mem_callbacks.o 00:03:37.191 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:37.453 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:37.453 LINK poller_perf 00:03:37.453 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:37.453 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:37.453 LINK jsoncat 00:03:37.453 LINK env_dpdk_post_init 00:03:37.453 LINK spdk_dd 00:03:37.711 LINK stub 00:03:37.711 LINK zipf 00:03:37.711 LINK vtophys 00:03:37.711 LINK histogram_perf 00:03:37.711 LINK verify 00:03:37.711 LINK ioat_perf 00:03:37.971 LINK bdev_svc 00:03:37.971 LINK spdk_trace 00:03:37.971 LINK nvme_fuzz 00:03:37.971 CC test/event/reactor_perf/reactor_perf.o 00:03:37.971 CC test/event/reactor/reactor.o 00:03:37.971 CC test/event/event_perf/event_perf.o 00:03:37.971 LINK vhost_fuzz 00:03:38.232 CC test/event/app_repeat/app_repeat.o 00:03:38.232 CC test/event/scheduler/scheduler.o 00:03:38.232 LINK pci_ut 00:03:38.232 LINK spdk_nvme_identify 00:03:38.232 LINK spdk_bdev 00:03:38.232 LINK test_dma 00:03:38.232 LINK spdk_nvme 00:03:38.232 LINK spdk_nvme_perf 00:03:38.232 LINK reactor_perf 00:03:38.232 CC examples/vmd/led/led.o 00:03:38.232 LINK reactor 00:03:38.232 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.232 CC examples/sock/hello_world/hello_sock.o 00:03:38.232 LINK event_perf 00:03:38.232 CC examples/idxd/perf/perf.o 00:03:38.232 LINK mem_callbacks 00:03:38.232 CC examples/thread/thread/thread_ex.o 00:03:38.232 LINK spdk_top 00:03:38.232 LINK app_repeat 00:03:38.492 CC app/vhost/vhost.o 00:03:38.492 LINK scheduler 00:03:38.492 LINK lsvmd 00:03:38.492 LINK led 00:03:38.492 LINK hello_sock 00:03:38.754 LINK idxd_perf 00:03:38.754 LINK thread 00:03:38.754 LINK vhost 00:03:38.754 LINK memory_ut 00:03:38.754 CC test/nvme/sgl/sgl.o 00:03:38.754 CC test/nvme/boot_partition/boot_partition.o 00:03:38.754 CC test/nvme/reset/reset.o 00:03:38.754 CC test/nvme/aer/aer.o 00:03:38.754 CC test/nvme/err_injection/err_injection.o 00:03:38.754 CC test/nvme/compliance/nvme_compliance.o 00:03:38.754 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.754 CC test/nvme/e2edp/nvme_dp.o 00:03:38.754 CC test/nvme/cuse/cuse.o 00:03:38.754 CC test/nvme/reserve/reserve.o 00:03:38.754 CC test/nvme/startup/startup.o 00:03:38.754 CC 
test/nvme/connect_stress/connect_stress.o 00:03:38.754 CC test/nvme/overhead/overhead.o 00:03:38.754 CC test/nvme/simple_copy/simple_copy.o 00:03:38.754 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.754 CC test/nvme/fdp/fdp.o 00:03:38.754 CC test/blobfs/mkfs/mkfs.o 00:03:38.754 CC test/accel/dif/dif.o 00:03:39.016 CC test/lvol/esnap/esnap.o 00:03:39.016 LINK boot_partition 00:03:39.016 LINK err_injection 00:03:39.016 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:39.016 CC examples/nvme/hello_world/hello_world.o 00:03:39.016 LINK startup 00:03:39.016 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.016 CC examples/nvme/abort/abort.o 00:03:39.016 CC examples/nvme/reconnect/reconnect.o 00:03:39.016 CC examples/nvme/hotplug/hotplug.o 00:03:39.016 LINK doorbell_aers 00:03:39.016 CC examples/nvme/arbitration/arbitration.o 00:03:39.016 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:39.016 LINK fused_ordering 00:03:39.016 LINK connect_stress 00:03:39.016 LINK reserve 00:03:39.016 LINK simple_copy 00:03:39.016 LINK sgl 00:03:39.016 LINK mkfs 00:03:39.016 LINK reset 00:03:39.277 LINK iscsi_fuzz 00:03:39.277 LINK aer 00:03:39.277 LINK nvme_dp 00:03:39.277 LINK overhead 00:03:39.277 LINK nvme_compliance 00:03:39.277 LINK fdp 00:03:39.277 CC examples/accel/perf/accel_perf.o 00:03:39.277 LINK cmb_copy 00:03:39.277 CC examples/blob/hello_world/hello_blob.o 00:03:39.277 CC examples/blob/cli/blobcli.o 00:03:39.277 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:39.277 LINK pmr_persistence 00:03:39.277 LINK hello_world 00:03:39.277 LINK hotplug 00:03:39.538 LINK arbitration 00:03:39.538 LINK abort 00:03:39.538 LINK reconnect 00:03:39.538 LINK dif 00:03:39.538 LINK nvme_manage 00:03:39.538 LINK hello_blob 00:03:39.538 LINK hello_fsdev 00:03:39.799 LINK accel_perf 00:03:39.799 LINK blobcli 00:03:40.060 LINK cuse 00:03:40.060 CC test/bdev/bdevio/bdevio.o 00:03:40.320 CC examples/bdev/hello_world/hello_bdev.o 00:03:40.320 CC examples/bdev/bdevperf/bdevperf.o 00:03:40.581 LINK bdevio 00:03:40.581 LINK hello_bdev 00:03:41.152 LINK bdevperf 00:03:41.722 CC examples/nvmf/nvmf/nvmf.o 00:03:41.983 LINK nvmf 00:03:43.367 LINK esnap 00:03:43.939 00:03:43.939 real 0m56.611s 00:03:43.939 user 8m10.279s 00:03:43.939 sys 5m38.872s 00:03:43.939 15:58:19 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:43.939 15:58:19 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.939 ************************************ 00:03:43.939 END TEST make 00:03:43.939 ************************************ 00:03:43.939 15:58:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.939 15:58:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.939 15:58:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.939 15:58:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.940 15:58:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.940 15:58:19 -- pm/common@44 -- $ pid=954034 00:03:43.940 15:58:19 -- pm/common@50 -- $ kill -TERM 954034 00:03:43.940 15:58:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.940 15:58:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.940 15:58:19 -- pm/common@44 -- $ pid=954035 00:03:43.940 15:58:19 -- pm/common@50 -- $ kill -TERM 954035 00:03:43.940 15:58:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.940 15:58:19 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:43.940 15:58:19 -- pm/common@44 -- $ pid=954037 00:03:43.940 15:58:19 -- pm/common@50 -- $ kill -TERM 954037 00:03:43.940 15:58:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.940 15:58:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:43.940 15:58:19 -- pm/common@44 -- $ pid=954060 00:03:43.940 15:58:19 -- pm/common@50 -- $ sudo -E kill -TERM 954060 00:03:43.940 15:58:19 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:43.940 15:58:19 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:43.940 15:58:19 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.940 15:58:19 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.940 15:58:19 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:44.202 15:58:19 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:44.202 15:58:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.202 15:58:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.202 15:58:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.202 15:58:19 -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.202 15:58:19 -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.202 15:58:19 -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.202 15:58:19 -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.202 15:58:19 -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.202 15:58:19 -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.202 15:58:19 -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.202 15:58:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.202 15:58:19 -- scripts/common.sh@344 -- # case "$op" in 00:03:44.202 15:58:19 -- scripts/common.sh@345 -- # : 1 00:03:44.202 15:58:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.202 15:58:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.202 15:58:19 -- scripts/common.sh@365 -- # decimal 1 00:03:44.202 15:58:19 -- scripts/common.sh@353 -- # local d=1 00:03:44.202 15:58:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.202 15:58:19 -- scripts/common.sh@355 -- # echo 1 00:03:44.202 15:58:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.202 15:58:19 -- scripts/common.sh@366 -- # decimal 2 00:03:44.202 15:58:19 -- scripts/common.sh@353 -- # local d=2 00:03:44.202 15:58:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.202 15:58:19 -- scripts/common.sh@355 -- # echo 2 00:03:44.202 15:58:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.202 15:58:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.202 15:58:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.202 15:58:19 -- scripts/common.sh@368 -- # return 0 00:03:44.202 15:58:19 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.202 15:58:19 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:44.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.202 --rc genhtml_branch_coverage=1 00:03:44.202 --rc genhtml_function_coverage=1 00:03:44.202 --rc genhtml_legend=1 00:03:44.202 --rc geninfo_all_blocks=1 00:03:44.202 --rc geninfo_unexecuted_blocks=1 00:03:44.202 00:03:44.202 ' 00:03:44.202 15:58:19 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:44.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.202 --rc genhtml_branch_coverage=1 00:03:44.202 --rc genhtml_function_coverage=1 00:03:44.202 --rc genhtml_legend=1 00:03:44.202 --rc geninfo_all_blocks=1 00:03:44.202 --rc geninfo_unexecuted_blocks=1 00:03:44.202 00:03:44.202 ' 00:03:44.202 15:58:19 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:44.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.202 --rc genhtml_branch_coverage=1 00:03:44.202 --rc genhtml_function_coverage=1 00:03:44.202 --rc genhtml_legend=1 00:03:44.202 --rc geninfo_all_blocks=1 00:03:44.202 --rc geninfo_unexecuted_blocks=1 00:03:44.202 00:03:44.202 ' 00:03:44.202 15:58:19 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:44.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.202 --rc genhtml_branch_coverage=1 00:03:44.202 --rc genhtml_function_coverage=1 00:03:44.202 --rc genhtml_legend=1 00:03:44.202 --rc geninfo_all_blocks=1 00:03:44.202 --rc geninfo_unexecuted_blocks=1 00:03:44.202 00:03:44.202 ' 00:03:44.202 15:58:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.202 15:58:19 -- nvmf/common.sh@7 -- # uname -s 00:03:44.202 15:58:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.202 15:58:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.202 15:58:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.202 15:58:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.202 15:58:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.202 15:58:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.202 15:58:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.202 15:58:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.202 15:58:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.202 15:58:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.202 15:58:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:44.202 15:58:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:44.202 15:58:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.202 15:58:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.202 15:58:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:44.202 15:58:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.202 15:58:19 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.202 15:58:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:44.202 15:58:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.202 15:58:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.202 15:58:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.202 15:58:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.202 15:58:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.202 15:58:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.202 15:58:19 -- paths/export.sh@5 -- # export PATH 00:03:44.202 15:58:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.202 15:58:19 -- nvmf/common.sh@51 -- # : 0 00:03:44.202 15:58:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:44.202 15:58:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:44.202 15:58:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.202 15:58:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.202 15:58:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.202 15:58:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:44.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:44.202 15:58:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:44.202 15:58:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:44.202 15:58:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:44.202 15:58:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:44.202 15:58:19 -- spdk/autotest.sh@32 -- # uname -s 00:03:44.202 15:58:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:44.202 15:58:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:44.202 15:58:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
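An aside on the "[: : integer expression expected" message captured above from nvmf/common.sh line 33: the traced test '[' '' -eq 1 ']' hands an empty string to the numeric -eq operator, which test cannot parse, so the check errors out and merely falls through on its non-zero status. A minimal reproduction plus the usual guard, as a sketch (the variable name here is hypothetical, not the one common.sh uses):

flag=''
if [ "$flag" -eq 1 ] 2>/dev/null; then   # errors: '' is not an integer
    echo "never reached"
fi
if [ "${flag:-0}" -eq 1 ]; then          # guard: an empty value defaults to 0
    echo "feature enabled"
fi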
00:03:44.202 15:58:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:44.203 15:58:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:44.203 15:58:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:44.203 15:58:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:44.203 15:58:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:44.203 15:58:19 -- spdk/autotest.sh@48 -- # udevadm_pid=1020164 00:03:44.203 15:58:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:44.203 15:58:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:44.203 15:58:19 -- pm/common@17 -- # local monitor 00:03:44.203 15:58:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.203 15:58:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.203 15:58:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.203 15:58:19 -- pm/common@21 -- # date +%s 00:03:44.203 15:58:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.203 15:58:19 -- pm/common@25 -- # sleep 1 00:03:44.203 15:58:19 -- pm/common@21 -- # date +%s 00:03:44.203 15:58:19 -- pm/common@21 -- # date +%s 00:03:44.203 15:58:19 -- pm/common@21 -- # date +%s 00:03:44.203 15:58:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114699 00:03:44.203 15:58:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114699 00:03:44.203 15:58:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114700 00:03:44.203 15:58:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732114700 00:03:44.203 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114699_collect-cpu-load.pm.log 00:03:44.203 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114699_collect-vmstat.pm.log 00:03:44.203 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114700_collect-cpu-temp.pm.log 00:03:44.203 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732114700_collect-bmc-pm.bmc.pm.log 00:03:45.146 15:58:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.146 15:58:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:45.146 15:58:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.146 15:58:21 -- common/autotest_common.sh@10 -- # set +x 00:03:45.146 15:58:21 -- spdk/autotest.sh@59 -- # create_test_list 00:03:45.146 15:58:21 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:45.146 15:58:21 -- common/autotest_common.sh@10 -- # set +x 00:03:45.146 15:58:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:45.146 15:58:21 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.146 15:58:21 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.146 15:58:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:45.146 15:58:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.146 15:58:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:45.146 15:58:21 -- common/autotest_common.sh@1457 -- # uname 00:03:45.146 15:58:21 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:45.146 15:58:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:45.146 15:58:21 -- common/autotest_common.sh@1477 -- # uname 00:03:45.407 15:58:21 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:45.407 15:58:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:45.407 15:58:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:45.407 lcov: LCOV version 1.15 00:03:45.407 15:58:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:11.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:11.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:16.189 15:58:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:16.189 15:58:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.189 15:58:52 -- common/autotest_common.sh@10 -- # set +x 00:04:16.189 15:58:52 -- spdk/autotest.sh@78 -- # rm -f 00:04:16.189 15:58:52 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.397 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:20.397 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:20.397 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:20.397 15:58:56 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:20.397 15:58:56 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:20.397 15:58:56 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:20.397 15:58:56 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:20.397 15:58:56 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.397 15:58:56 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:20.397 15:58:56 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:20.397 15:58:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.397 15:58:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.397 15:58:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:20.397 15:58:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.397 15:58:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.397 15:58:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:20.397 15:58:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:20.397 15:58:56 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.397 No valid GPT data, bailing 00:04:20.397 15:58:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.397 15:58:56 -- scripts/common.sh@394 -- # pt= 00:04:20.397 15:58:56 -- scripts/common.sh@395 -- # return 1 00:04:20.397 15:58:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:20.397 1+0 records in 00:04:20.397 1+0 records out 00:04:20.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0019897 s, 527 MB/s 00:04:20.397 15:58:56 -- spdk/autotest.sh@105 -- # sync 00:04:20.397 15:58:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.397 15:58:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.397 15:58:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:30.395 15:59:04 -- spdk/autotest.sh@111 -- # uname -s 00:04:30.395 15:59:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:30.395 15:59:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:30.395 15:59:04 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:32.939 Hugepages 00:04:32.939 node hugesize free / total 00:04:32.939 node0 1048576kB 0 / 0 00:04:32.939 node0 2048kB 0 / 0 00:04:32.939 node1 1048576kB 0 / 0 00:04:32.939 node1 2048kB 0 / 0 00:04:32.939 00:04:32.939 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:32.939 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:32.939 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:32.939 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:32.939 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:32.939 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:32.939 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:32.939 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:32.939 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:32.939 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:32.939 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:32.939 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:32.939 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:32.939 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:32.939 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:32.939 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:32.939 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:32.939 I/OAT 0000:80:01.7 8086 0b00 
1 ioatdma - - 00:04:32.939 15:59:08 -- spdk/autotest.sh@117 -- # uname -s 00:04:32.939 15:59:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:32.939 15:59:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:32.939 15:59:08 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.244 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:36.244 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:36.244 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:36.244 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:36.244 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:36.244 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:36.244 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:36.504 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:38.418 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:38.679 15:59:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:39.620 15:59:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:39.620 15:59:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:39.620 15:59:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:39.620 15:59:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:39.620 15:59:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:39.620 15:59:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:39.620 15:59:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.621 15:59:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:39.621 15:59:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:39.621 15:59:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:39.621 15:59:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:39.621 15:59:15 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.924 Waiting for block devices as requested 00:04:43.185 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:43.185 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:43.185 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:43.453 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:43.453 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:43.453 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:43.720 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:43.720 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:43.720 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:43.981 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:43.981 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:44.242 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:44.242 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:44.242 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:44.503 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:44.503 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:44.503 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:04:44.763 15:59:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.763 15:59:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:44.763 15:59:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:44.763 15:59:20 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:44.763 15:59:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:44.763 15:59:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:44.763 15:59:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:44.763 15:59:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:44.763 15:59:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:44.763 15:59:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:45.024 15:59:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:45.024 15:59:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:45.024 15:59:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:45.024 15:59:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:45.024 15:59:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:45.024 15:59:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:45.024 15:59:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:45.024 15:59:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:45.024 15:59:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:45.024 15:59:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:45.024 15:59:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:45.024 15:59:20 -- common/autotest_common.sh@1543 -- # continue 00:04:45.024 15:59:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:45.024 15:59:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.024 15:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.024 15:59:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:45.024 15:59:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.024 15:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.024 15:59:20 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.324 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:48.324 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:48.324 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:48.585 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:49.155 15:59:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
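The pre-cleanup pass traced above works out whether the attached controller supports namespace management by pulling the OACS word out of nvme id-ctrl (here oacs=' 0x5f'; bit 3 is set, so oacs_ns_manage=8), then confirms that unvmcap is zero before moving on. The same probe, condensed into a standalone sketch (assuming nvme-cli labels the id-ctrl fields oacs and unvmcap, as this log shows):

ctrlr=/dev/nvme0   # resolved from the 0000:65:00.0 sysfs path above
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0x5f'
if (( (oacs & 0x8) != 0 )); then   # bit 3: namespace management supported
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "no unallocated NVM capacity; nothing to revert"
fi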
00:04:49.155 15:59:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.155 15:59:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.156 15:59:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:49.156 15:59:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:49.156 15:59:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.156 15:59:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:49.156 15:59:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:49.156 15:59:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:49.156 15:59:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:49.156 15:59:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:49.156 15:59:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:49.156 15:59:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:49.156 15:59:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.156 15:59:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.156 15:59:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:49.156 15:59:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:49.156 15:59:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:49.156 15:59:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:49.156 15:59:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:49.156 15:59:24 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:49.156 15:59:24 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:49.156 15:59:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:49.156 15:59:24 -- common/autotest_common.sh@1572 -- # return 0 00:04:49.156 15:59:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:49.156 15:59:24 -- common/autotest_common.sh@1580 -- # return 0 00:04:49.156 15:59:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:49.156 15:59:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:49.156 15:59:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:49.156 15:59:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:49.156 15:59:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:49.156 15:59:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.156 15:59:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.156 15:59:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:49.156 15:59:24 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.156 15:59:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.156 15:59:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.156 15:59:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.156 ************************************ 00:04:49.156 START TEST env 00:04:49.156 ************************************ 00:04:49.156 15:59:25 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.416 * Looking for test storage... 
00:04:49.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:49.416 15:59:25 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.416 15:59:25 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.416 15:59:25 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.416 15:59:25 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.416 15:59:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.416 15:59:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.416 15:59:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.417 15:59:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.417 15:59:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.417 15:59:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.417 15:59:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.417 15:59:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.417 15:59:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.417 15:59:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.417 15:59:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.417 15:59:25 env -- scripts/common.sh@344 -- # case "$op" in 00:04:49.417 15:59:25 env -- scripts/common.sh@345 -- # : 1 00:04:49.417 15:59:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.417 15:59:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.417 15:59:25 env -- scripts/common.sh@365 -- # decimal 1 00:04:49.417 15:59:25 env -- scripts/common.sh@353 -- # local d=1 00:04:49.417 15:59:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.417 15:59:25 env -- scripts/common.sh@355 -- # echo 1 00:04:49.417 15:59:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.417 15:59:25 env -- scripts/common.sh@366 -- # decimal 2 00:04:49.417 15:59:25 env -- scripts/common.sh@353 -- # local d=2 00:04:49.417 15:59:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.417 15:59:25 env -- scripts/common.sh@355 -- # echo 2 00:04:49.417 15:59:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.417 15:59:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.417 15:59:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.417 15:59:25 env -- scripts/common.sh@368 -- # return 0 00:04:49.417 15:59:25 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.417 15:59:25 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.417 --rc genhtml_branch_coverage=1 00:04:49.417 --rc genhtml_function_coverage=1 00:04:49.417 --rc genhtml_legend=1 00:04:49.417 --rc geninfo_all_blocks=1 00:04:49.417 --rc geninfo_unexecuted_blocks=1 00:04:49.417 00:04:49.417 ' 00:04:49.417 15:59:25 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.417 --rc genhtml_branch_coverage=1 00:04:49.417 --rc genhtml_function_coverage=1 00:04:49.417 --rc genhtml_legend=1 00:04:49.417 --rc geninfo_all_blocks=1 00:04:49.417 --rc geninfo_unexecuted_blocks=1 00:04:49.417 00:04:49.417 ' 00:04:49.417 15:59:25 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.417 --rc genhtml_branch_coverage=1 00:04:49.417 --rc genhtml_function_coverage=1 
00:04:49.417 --rc genhtml_legend=1 00:04:49.417 --rc geninfo_all_blocks=1 00:04:49.417 --rc geninfo_unexecuted_blocks=1 00:04:49.417 00:04:49.417 ' 00:04:49.417 15:59:25 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.417 --rc genhtml_branch_coverage=1 00:04:49.417 --rc genhtml_function_coverage=1 00:04:49.417 --rc genhtml_legend=1 00:04:49.417 --rc geninfo_all_blocks=1 00:04:49.417 --rc geninfo_unexecuted_blocks=1 00:04:49.417 00:04:49.417 ' 00:04:49.417 15:59:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.417 15:59:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.417 15:59:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.417 15:59:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.417 ************************************ 00:04:49.417 START TEST env_memory 00:04:49.417 ************************************ 00:04:49.417 15:59:25 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.417 00:04:49.417 00:04:49.417 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.417 http://cunit.sourceforge.net/ 00:04:49.417 00:04:49.417 00:04:49.417 Suite: memory 00:04:49.417 Test: alloc and free memory map ...[2024-11-20 15:59:25.336803] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.417 passed 00:04:49.678 Test: mem map translation ...[2024-11-20 15:59:25.362501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.678 [2024-11-20 15:59:25.362533] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.678 [2024-11-20 15:59:25.362581] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.678 [2024-11-20 15:59:25.362588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.678 passed 00:04:49.678 Test: mem map registration ...[2024-11-20 15:59:25.417838] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:49.678 [2024-11-20 15:59:25.417875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:49.678 passed 00:04:49.678 Test: mem map adjacent registrations ...passed 00:04:49.678 00:04:49.678 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.678 suites 1 1 n/a 0 0 00:04:49.678 tests 4 4 4 0 0 00:04:49.678 asserts 152 152 152 0 n/a 00:04:49.678 00:04:49.678 Elapsed time = 0.192 seconds 00:04:49.678 00:04:49.678 real 0m0.206s 00:04:49.678 user 0m0.198s 00:04:49.678 sys 0m0.008s 00:04:49.678 15:59:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.678 15:59:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
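Both during autotest startup and again here before the env suite, the harness gates the lcov --rc coverage flags on the version test lt 1.15 2 traced above: each version string is split on '.', '-' and ':' into an array and the fields are compared numerically, left to right. A condensed sketch of that comparison (a loose reconstruction of the traced scripts/common.sh logic, padding missing fields with 0):

lt() {  # succeeds when version $1 sorts strictly before version $2
    local -a v1 v2; local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
lt 1.15 2 && echo "lcov older than 2: keep the branch/function --rc options"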
00:04:49.678 ************************************ 00:04:49.678 END TEST env_memory 00:04:49.678 ************************************ 00:04:49.678 15:59:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.678 15:59:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.678 15:59:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.678 15:59:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.678 ************************************ 00:04:49.678 START TEST env_vtophys 00:04:49.678 ************************************ 00:04:49.678 15:59:25 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.678 EAL: lib.eal log level changed from notice to debug 00:04:49.678 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.678 EAL: Detected lcore 1 as core 1 on socket 0 00:04:49.678 EAL: Detected lcore 2 as core 2 on socket 0 00:04:49.678 EAL: Detected lcore 3 as core 3 on socket 0 00:04:49.678 EAL: Detected lcore 4 as core 4 on socket 0 00:04:49.678 EAL: Detected lcore 5 as core 5 on socket 0 00:04:49.678 EAL: Detected lcore 6 as core 6 on socket 0 00:04:49.678 EAL: Detected lcore 7 as core 7 on socket 0 00:04:49.678 EAL: Detected lcore 8 as core 8 on socket 0 00:04:49.678 EAL: Detected lcore 9 as core 9 on socket 0 00:04:49.678 EAL: Detected lcore 10 as core 10 on socket 0 00:04:49.678 EAL: Detected lcore 11 as core 11 on socket 0 00:04:49.678 EAL: Detected lcore 12 as core 12 on socket 0 00:04:49.678 EAL: Detected lcore 13 as core 13 on socket 0 00:04:49.678 EAL: Detected lcore 14 as core 14 on socket 0 00:04:49.678 EAL: Detected lcore 15 as core 15 on socket 0 00:04:49.678 EAL: Detected lcore 16 as core 16 on socket 0 00:04:49.678 EAL: Detected lcore 17 as core 17 on socket 0 00:04:49.678 EAL: Detected lcore 18 as core 18 on socket 0 00:04:49.678 EAL: Detected lcore 19 as core 19 on socket 0 00:04:49.678 EAL: Detected lcore 20 as core 20 on socket 0 00:04:49.678 EAL: Detected lcore 21 as core 21 on socket 0 00:04:49.678 EAL: Detected lcore 22 as core 22 on socket 0 00:04:49.678 EAL: Detected lcore 23 as core 23 on socket 0 00:04:49.678 EAL: Detected lcore 24 as core 24 on socket 0 00:04:49.678 EAL: Detected lcore 25 as core 25 on socket 0 00:04:49.678 EAL: Detected lcore 26 as core 26 on socket 0 00:04:49.678 EAL: Detected lcore 27 as core 27 on socket 0 00:04:49.678 EAL: Detected lcore 28 as core 28 on socket 0 00:04:49.678 EAL: Detected lcore 29 as core 29 on socket 0 00:04:49.678 EAL: Detected lcore 30 as core 30 on socket 0 00:04:49.678 EAL: Detected lcore 31 as core 31 on socket 0 00:04:49.678 EAL: Detected lcore 32 as core 32 on socket 0 00:04:49.678 EAL: Detected lcore 33 as core 33 on socket 0 00:04:49.678 EAL: Detected lcore 34 as core 34 on socket 0 00:04:49.678 EAL: Detected lcore 35 as core 35 on socket 0 00:04:49.678 EAL: Detected lcore 36 as core 0 on socket 1 00:04:49.678 EAL: Detected lcore 37 as core 1 on socket 1 00:04:49.678 EAL: Detected lcore 38 as core 2 on socket 1 00:04:49.678 EAL: Detected lcore 39 as core 3 on socket 1 00:04:49.678 EAL: Detected lcore 40 as core 4 on socket 1 00:04:49.678 EAL: Detected lcore 41 as core 5 on socket 1 00:04:49.678 EAL: Detected lcore 42 as core 6 on socket 1 00:04:49.678 EAL: Detected lcore 43 as core 7 on socket 1 00:04:49.678 EAL: Detected lcore 44 as core 8 on socket 1 00:04:49.678 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:49.678 EAL: Detected lcore 46 as core 10 on socket 1 00:04:49.678 EAL: Detected lcore 47 as core 11 on socket 1 00:04:49.678 EAL: Detected lcore 48 as core 12 on socket 1 00:04:49.678 EAL: Detected lcore 49 as core 13 on socket 1 00:04:49.678 EAL: Detected lcore 50 as core 14 on socket 1 00:04:49.678 EAL: Detected lcore 51 as core 15 on socket 1 00:04:49.678 EAL: Detected lcore 52 as core 16 on socket 1 00:04:49.678 EAL: Detected lcore 53 as core 17 on socket 1 00:04:49.678 EAL: Detected lcore 54 as core 18 on socket 1 00:04:49.678 EAL: Detected lcore 55 as core 19 on socket 1 00:04:49.678 EAL: Detected lcore 56 as core 20 on socket 1 00:04:49.678 EAL: Detected lcore 57 as core 21 on socket 1 00:04:49.678 EAL: Detected lcore 58 as core 22 on socket 1 00:04:49.678 EAL: Detected lcore 59 as core 23 on socket 1 00:04:49.678 EAL: Detected lcore 60 as core 24 on socket 1 00:04:49.678 EAL: Detected lcore 61 as core 25 on socket 1 00:04:49.678 EAL: Detected lcore 62 as core 26 on socket 1 00:04:49.678 EAL: Detected lcore 63 as core 27 on socket 1 00:04:49.678 EAL: Detected lcore 64 as core 28 on socket 1 00:04:49.678 EAL: Detected lcore 65 as core 29 on socket 1 00:04:49.678 EAL: Detected lcore 66 as core 30 on socket 1 00:04:49.678 EAL: Detected lcore 67 as core 31 on socket 1 00:04:49.678 EAL: Detected lcore 68 as core 32 on socket 1 00:04:49.678 EAL: Detected lcore 69 as core 33 on socket 1 00:04:49.678 EAL: Detected lcore 70 as core 34 on socket 1 00:04:49.678 EAL: Detected lcore 71 as core 35 on socket 1 00:04:49.678 EAL: Detected lcore 72 as core 0 on socket 0 00:04:49.678 EAL: Detected lcore 73 as core 1 on socket 0 00:04:49.678 EAL: Detected lcore 74 as core 2 on socket 0 00:04:49.678 EAL: Detected lcore 75 as core 3 on socket 0 00:04:49.678 EAL: Detected lcore 76 as core 4 on socket 0 00:04:49.678 EAL: Detected lcore 77 as core 5 on socket 0 00:04:49.678 EAL: Detected lcore 78 as core 6 on socket 0 00:04:49.678 EAL: Detected lcore 79 as core 7 on socket 0 00:04:49.678 EAL: Detected lcore 80 as core 8 on socket 0 00:04:49.678 EAL: Detected lcore 81 as core 9 on socket 0 00:04:49.678 EAL: Detected lcore 82 as core 10 on socket 0 00:04:49.678 EAL: Detected lcore 83 as core 11 on socket 0 00:04:49.678 EAL: Detected lcore 84 as core 12 on socket 0 00:04:49.678 EAL: Detected lcore 85 as core 13 on socket 0 00:04:49.678 EAL: Detected lcore 86 as core 14 on socket 0 00:04:49.678 EAL: Detected lcore 87 as core 15 on socket 0 00:04:49.678 EAL: Detected lcore 88 as core 16 on socket 0 00:04:49.678 EAL: Detected lcore 89 as core 17 on socket 0 00:04:49.678 EAL: Detected lcore 90 as core 18 on socket 0 00:04:49.679 EAL: Detected lcore 91 as core 19 on socket 0 00:04:49.679 EAL: Detected lcore 92 as core 20 on socket 0 00:04:49.679 EAL: Detected lcore 93 as core 21 on socket 0 00:04:49.679 EAL: Detected lcore 94 as core 22 on socket 0 00:04:49.679 EAL: Detected lcore 95 as core 23 on socket 0 00:04:49.679 EAL: Detected lcore 96 as core 24 on socket 0 00:04:49.679 EAL: Detected lcore 97 as core 25 on socket 0 00:04:49.679 EAL: Detected lcore 98 as core 26 on socket 0 00:04:49.679 EAL: Detected lcore 99 as core 27 on socket 0 00:04:49.679 EAL: Detected lcore 100 as core 28 on socket 0 00:04:49.679 EAL: Detected lcore 101 as core 29 on socket 0 00:04:49.679 EAL: Detected lcore 102 as core 30 on socket 0 00:04:49.679 EAL: Detected lcore 103 as core 31 on socket 0 00:04:49.679 EAL: Detected lcore 104 as core 32 on socket 0 00:04:49.679 EAL: Detected lcore 105 as core 33 on socket 0 00:04:49.679 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:49.679 EAL: Detected lcore 107 as core 35 on socket 0 00:04:49.679 EAL: Detected lcore 108 as core 0 on socket 1 00:04:49.679 EAL: Detected lcore 109 as core 1 on socket 1 00:04:49.679 EAL: Detected lcore 110 as core 2 on socket 1 00:04:49.679 EAL: Detected lcore 111 as core 3 on socket 1 00:04:49.679 EAL: Detected lcore 112 as core 4 on socket 1 00:04:49.679 EAL: Detected lcore 113 as core 5 on socket 1 00:04:49.679 EAL: Detected lcore 114 as core 6 on socket 1 00:04:49.679 EAL: Detected lcore 115 as core 7 on socket 1 00:04:49.679 EAL: Detected lcore 116 as core 8 on socket 1 00:04:49.679 EAL: Detected lcore 117 as core 9 on socket 1 00:04:49.679 EAL: Detected lcore 118 as core 10 on socket 1 00:04:49.679 EAL: Detected lcore 119 as core 11 on socket 1 00:04:49.679 EAL: Detected lcore 120 as core 12 on socket 1 00:04:49.679 EAL: Detected lcore 121 as core 13 on socket 1 00:04:49.679 EAL: Detected lcore 122 as core 14 on socket 1 00:04:49.679 EAL: Detected lcore 123 as core 15 on socket 1 00:04:49.679 EAL: Detected lcore 124 as core 16 on socket 1 00:04:49.679 EAL: Detected lcore 125 as core 17 on socket 1 00:04:49.679 EAL: Detected lcore 126 as core 18 on socket 1 00:04:49.679 EAL: Detected lcore 127 as core 19 on socket 1 00:04:49.679 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:49.679 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:49.679 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:49.679 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:49.679 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:49.679 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:49.679 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:49.679 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:49.679 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:49.679 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:49.679 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:49.679 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:49.679 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:49.679 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:49.679 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:49.679 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:49.679 EAL: Maximum logical cores by configuration: 128 00:04:49.679 EAL: Detected CPU lcores: 128 00:04:49.679 EAL: Detected NUMA nodes: 2 00:04:49.679 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:49.679 EAL: Detected shared linkage of DPDK 00:04:49.679 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.940 EAL: Bus pci wants IOVA as 'DC' 00:04:49.940 EAL: Buses did not request a specific IOVA mode. 00:04:49.940 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:49.940 EAL: Selected IOVA mode 'VA' 00:04:49.940 EAL: Probing VFIO support... 00:04:49.940 EAL: IOMMU type 1 (Type 1) is supported 00:04:49.940 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:49.940 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:49.940 EAL: VFIO support initialized 00:04:49.940 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.940 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.940 EAL: Setting up physically contiguous memory... 
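The EAL banner above maps 128 usable lcores onto cores across two sockets (lcores 128-143 are skipped once the build's 128-lcore maximum is reached), finds 2 NUMA nodes, and settles on IOVA mode 'VA' because a type 1 IOMMU is available through VFIO. The same facts can be sanity-checked from userspace ahead of a run; a small sketch against standard interfaces (lscpu's parseable output and the IOMMU sysfs class):

lscpu -p=cpu,core,socket | grep -v '^#' | head -4   # lcore -> core/socket, as EAL reports it
ls /sys/devices/system/node | grep -c '^node'       # NUMA node count (2 on this box)
if [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
    echo "IOMMU groups present: EAL can select IOVA mode 'VA' via VFIO"
fi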
00:04:49.940 EAL: Setting maximum number of open files to 524288 00:04:49.940 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.940 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:49.940 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.940 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:49.940 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.940 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:49.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.940 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.940 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:49.940 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:49.940 EAL: Hugepages will be freed exactly as allocated. 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: TSC frequency is ~2400000 KHz 00:04:49.940 EAL: Main lcore 0 is ready (tid=7ff0d6b1fa00;cpuset=[0]) 00:04:49.940 EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 0 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.940 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.940 00:04:49.940 00:04:49.940 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.940 http://cunit.sourceforge.net/ 00:04:49.940 00:04:49.940 00:04:49.940 Suite: components_suite 00:04:49.940 Test: vtophys_malloc_test ...passed 00:04:49.940 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.940 EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.940 EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.940 EAL: Trying to obtain current memory policy. 
00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 18MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was shrunk by 18MB 00:04:49.940 EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 34MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was shrunk by 34MB 00:04:49.940 EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 66MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was shrunk by 66MB 00:04:49.940 EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 130MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was shrunk by 130MB 00:04:49.940 EAL: Trying to obtain current memory policy. 00:04:49.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.940 EAL: Restoring previous memory policy: 4 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.940 EAL: request: mp_malloc_sync 00:04:49.940 EAL: No shared files mode enabled, IPC is disabled 00:04:49.940 EAL: Heap on socket 0 was expanded by 258MB 00:04:49.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.201 EAL: request: mp_malloc_sync 00:04:50.201 EAL: No shared files mode enabled, IPC is disabled 00:04:50.201 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.201 EAL: Trying to obtain current memory policy. 
00:04:50.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.201 EAL: Restoring previous memory policy: 4 00:04:50.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.201 EAL: request: mp_malloc_sync 00:04:50.201 EAL: No shared files mode enabled, IPC is disabled 00:04:50.201 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.201 EAL: request: mp_malloc_sync 00:04:50.201 EAL: No shared files mode enabled, IPC is disabled 00:04:50.201 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.201 EAL: Trying to obtain current memory policy. 00:04:50.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.461 EAL: Restoring previous memory policy: 4 00:04:50.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.461 EAL: request: mp_malloc_sync 00:04:50.461 EAL: No shared files mode enabled, IPC is disabled 00:04:50.461 EAL: Heap on socket 0 was expanded by 1026MB 00:04:50.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.777 EAL: request: mp_malloc_sync 00:04:50.777 EAL: No shared files mode enabled, IPC is disabled 00:04:50.777 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:50.777 passed 00:04:50.777 00:04:50.777 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.777 suites 1 1 n/a 0 0 00:04:50.777 tests 2 2 2 0 0 00:04:50.777 asserts 497 497 497 0 n/a 00:04:50.777 00:04:50.777 Elapsed time = 0.688 seconds 00:04:50.777 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.777 EAL: request: mp_malloc_sync 00:04:50.777 EAL: No shared files mode enabled, IPC is disabled 00:04:50.777 EAL: Heap on socket 0 was shrunk by 2MB 00:04:50.777 EAL: No shared files mode enabled, IPC is disabled 00:04:50.777 EAL: No shared files mode enabled, IPC is disabled 00:04:50.777 EAL: No shared files mode enabled, IPC is disabled 00:04:50.777 00:04:50.777 real 0m0.836s 00:04:50.777 user 0m0.444s 00:04:50.777 sys 0m0.368s 00:04:50.777 15:59:26 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.777 15:59:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:50.777 ************************************ 00:04:50.777 END TEST env_vtophys 00:04:50.777 ************************************ 00:04:50.777 15:59:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:50.777 15:59:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.777 15:59:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.777 15:59:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.777 ************************************ 00:04:50.777 START TEST env_pci 00:04:50.777 ************************************ 00:04:50.777 15:59:26 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:50.777 00:04:50.777 00:04:50.777 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.777 http://cunit.sourceforge.net/ 00:04:50.777 00:04:50.777 00:04:50.777 Suite: pci 00:04:50.777 Test: pci_hook ...[2024-11-20 15:59:26.511387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1039574 has claimed it 00:04:50.777 EAL: Cannot find device (10000:00:01.0) 00:04:50.777 EAL: Failed to attach device on primary process 00:04:50.777 passed 00:04:50.777 00:04:50.777 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:50.777 suites 1 1 n/a 0 0 00:04:50.777 tests 1 1 1 0 0 00:04:50.777 asserts 25 25 25 0 n/a 00:04:50.777 00:04:50.777 Elapsed time = 0.030 seconds 00:04:50.777 00:04:50.777 real 0m0.052s 00:04:50.777 user 0m0.021s 00:04:50.777 sys 0m0.030s 00:04:50.777 15:59:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.777 15:59:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:50.777 ************************************ 00:04:50.778 END TEST env_pci 00:04:50.778 ************************************ 00:04:50.778 15:59:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:50.778 15:59:26 env -- env/env.sh@15 -- # uname 00:04:50.778 15:59:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:50.778 15:59:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:50.778 15:59:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.778 15:59:26 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:50.778 15:59:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.778 15:59:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.778 ************************************ 00:04:50.778 START TEST env_dpdk_post_init 00:04:50.778 ************************************ 00:04:50.778 15:59:26 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.778 EAL: Detected CPU lcores: 128 00:04:50.778 EAL: Detected NUMA nodes: 2 00:04:50.778 EAL: Detected shared linkage of DPDK 00:04:51.093 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.093 EAL: Selected IOVA mode 'VA' 00:04:51.093 EAL: VFIO support initialized 00:04:51.093 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.093 EAL: Using IOMMU type 1 (Type 1) 00:04:51.093 EAL: Ignore mapping IO port bar(1) 00:04:51.380 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:51.380 EAL: Ignore mapping IO port bar(1) 00:04:51.380 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:51.642 EAL: Ignore mapping IO port bar(1) 00:04:51.642 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:51.904 EAL: Ignore mapping IO port bar(1) 00:04:51.904 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:51.904 EAL: Ignore mapping IO port bar(1) 00:04:52.166 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:52.166 EAL: Ignore mapping IO port bar(1) 00:04:52.427 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:52.427 EAL: Ignore mapping IO port bar(1) 00:04:52.427 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:52.688 EAL: Ignore mapping IO port bar(1) 00:04:52.688 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:52.948 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:53.209 EAL: Ignore mapping IO port bar(1) 00:04:53.209 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:53.470 EAL: Ignore mapping IO port bar(1) 00:04:53.470 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:53.470 EAL: Ignore mapping IO port bar(1) 00:04:53.733 
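The pci_hook failure above is the intended behavior of SPDK's per-device lock files: whoever claims a BDF first creates /var/tmp/spdk_pci_lock_<bdf>, and a second claimant gets an error instead of the device. A rough sketch of that claim path against the NVMe driver, assuming a hypothetical standalone tool (probe_cb and pci_claim_demo are illustrative names, not the test's code):

#include <stdio.h>
#include "spdk/env.h"

/* Hypothetical enumeration callback; returning 0 keeps the device attached. */
static int
probe_cb(void *ctx, struct spdk_pci_device *dev)
{
	struct spdk_pci_addr addr = spdk_pci_device_get_addr(dev);
	char bdf[32];

	(void)ctx;
	spdk_pci_addr_fmt(bdf, sizeof(bdf), &addr);
	if (spdk_pci_device_claim(dev) == 0)
		printf("claimed %s\n", bdf);
	else
		/* Another process holds /var/tmp/spdk_pci_lock_<bdf>,
		 * the condition pci_hook asserted on above. */
		printf("%s already claimed\n", bdf);
	return 0;
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "pci_claim_demo";
	if (spdk_env_init(&opts) < 0)
		return 1;
	/* Walk NVMe functions, as the spdk_nvme probe line below does. */
	spdk_pci_enumerate(spdk_pci_nvme_get_driver(), probe_cb, NULL);
	return 0;
}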
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:53.733 EAL: Ignore mapping IO port bar(1) 00:04:53.993 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:53.993 EAL: Ignore mapping IO port bar(1) 00:04:53.993 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:54.254 EAL: Ignore mapping IO port bar(1) 00:04:54.254 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:54.515 EAL: Ignore mapping IO port bar(1) 00:04:54.515 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:54.777 EAL: Ignore mapping IO port bar(1) 00:04:54.777 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:54.777 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:54.777 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:55.037 Starting DPDK initialization... 00:04:55.037 Starting SPDK post initialization... 00:04:55.037 SPDK NVMe probe 00:04:55.037 Attaching to 0000:65:00.0 00:04:55.037 Attached to 0000:65:00.0 00:04:55.037 Cleaning up... 00:04:56.951 00:04:56.951 real 0m5.749s 00:04:56.951 user 0m0.111s 00:04:56.951 sys 0m0.190s 00:04:56.951 15:59:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.951 15:59:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.951 ************************************ 00:04:56.951 END TEST env_dpdk_post_init 00:04:56.951 ************************************ 00:04:56.951 15:59:32 env -- env/env.sh@26 -- # uname 00:04:56.951 15:59:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.951 15:59:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.951 15:59:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.951 15:59:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.951 15:59:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.951 ************************************ 00:04:56.951 START TEST env_mem_callbacks 00:04:56.951 ************************************ 00:04:56.951 15:59:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.951 EAL: Detected CPU lcores: 128 00:04:56.951 EAL: Detected NUMA nodes: 2 00:04:56.951 EAL: Detected shared linkage of DPDK 00:04:56.951 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.951 EAL: Selected IOVA mode 'VA' 00:04:56.951 EAL: VFIO support initialized 00:04:56.951 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.951 00:04:56.951 00:04:56.951 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.951 http://cunit.sourceforge.net/ 00:04:56.951 00:04:56.951 00:04:56.951 Suite: memory 00:04:56.951 Test: test ... 
00:04:56.951 register 0x200000200000 2097152 00:04:56.951 malloc 3145728 00:04:56.951 register 0x200000400000 4194304 00:04:56.951 buf 0x200000500000 len 3145728 PASSED 00:04:56.951 malloc 64 00:04:56.951 buf 0x2000004fff40 len 64 PASSED 00:04:56.951 malloc 4194304 00:04:56.951 register 0x200000800000 6291456 00:04:56.951 buf 0x200000a00000 len 4194304 PASSED 00:04:56.951 free 0x200000500000 3145728 00:04:56.951 free 0x2000004fff40 64 00:04:56.951 unregister 0x200000400000 4194304 PASSED 00:04:56.951 free 0x200000a00000 4194304 00:04:56.951 unregister 0x200000800000 6291456 PASSED 00:04:56.951 malloc 8388608 00:04:56.951 register 0x200000400000 10485760 00:04:56.951 buf 0x200000600000 len 8388608 PASSED 00:04:56.951 free 0x200000600000 8388608 00:04:56.951 unregister 0x200000400000 10485760 PASSED 00:04:56.951 passed 00:04:56.951 00:04:56.951 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.951 suites 1 1 n/a 0 0 00:04:56.951 tests 1 1 1 0 0 00:04:56.951 asserts 15 15 15 0 n/a 00:04:56.952 00:04:56.952 Elapsed time = 0.010 seconds 00:04:56.952 00:04:56.952 real 0m0.071s 00:04:56.952 user 0m0.017s 00:04:56.952 sys 0m0.054s 00:04:56.952 15:59:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.952 15:59:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.952 ************************************ 00:04:56.952 END TEST env_mem_callbacks 00:04:56.952 ************************************ 00:04:56.952 00:04:56.952 real 0m7.545s 00:04:56.952 user 0m1.060s 00:04:56.952 sys 0m1.046s 00:04:56.952 15:59:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.952 15:59:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.952 ************************************ 00:04:56.952 END TEST env 00:04:56.952 ************************************ 00:04:56.952 15:59:32 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:56.952 15:59:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.952 15:59:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.952 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:04:56.952 ************************************ 00:04:56.952 START TEST rpc 00:04:56.952 ************************************ 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:56.952 * Looking for test storage... 
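The register/unregister trace in the test above is SPDK's DMA bookkeeping: a buffer must be registered before a device may target it, and the test's callback prints each event as allocations drive registrations. Tracking is at 2MB hugepage granularity, which is why the 3145728-byte malloc surfaces as a 4194304-byte register. A minimal sketch of the explicit API, assuming an already-initialized environment:

#include "spdk/env.h"

/* Sketch of the explicit registration API; the mem_callbacks test above
 * observes the same events through a callback it installed beforehand. */
static int
dma_window_demo(void *vaddr, size_t len)
{
	int rc;

	rc = spdk_mem_register(vaddr, len);	/* "register 0x... <len>" */
	if (rc != 0)
		return rc;

	/* ... the region is now safe to hand to a device for DMA ... */

	return spdk_mem_unregister(vaddr, len);	/* "unregister 0x... <len>" */
}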
00:04:56.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.952 15:59:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.952 15:59:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.952 15:59:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.952 15:59:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.952 15:59:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.952 15:59:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.952 15:59:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.952 15:59:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:56.952 15:59:32 rpc -- scripts/common.sh@345 -- # : 1 00:04:56.952 15:59:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.952 15:59:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.952 15:59:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.952 15:59:32 rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.952 15:59:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.952 15:59:32 rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.952 15:59:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.952 15:59:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.952 15:59:32 rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.952 15:59:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.952 15:59:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.952 15:59:32 rpc -- scripts/common.sh@368 -- # return 0 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.952 --rc genhtml_branch_coverage=1 00:04:56.952 --rc genhtml_function_coverage=1 00:04:56.952 --rc genhtml_legend=1 00:04:56.952 --rc geninfo_all_blocks=1 00:04:56.952 --rc geninfo_unexecuted_blocks=1 00:04:56.952 00:04:56.952 ' 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.952 --rc genhtml_branch_coverage=1 00:04:56.952 --rc genhtml_function_coverage=1 00:04:56.952 --rc genhtml_legend=1 00:04:56.952 --rc geninfo_all_blocks=1 00:04:56.952 --rc geninfo_unexecuted_blocks=1 00:04:56.952 00:04:56.952 ' 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.952 --rc genhtml_branch_coverage=1 00:04:56.952 --rc genhtml_function_coverage=1 
00:04:56.952 --rc genhtml_legend=1 00:04:56.952 --rc geninfo_all_blocks=1 00:04:56.952 --rc geninfo_unexecuted_blocks=1 00:04:56.952 00:04:56.952 ' 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.952 --rc genhtml_branch_coverage=1 00:04:56.952 --rc genhtml_function_coverage=1 00:04:56.952 --rc genhtml_legend=1 00:04:56.952 --rc geninfo_all_blocks=1 00:04:56.952 --rc geninfo_unexecuted_blocks=1 00:04:56.952 00:04:56.952 ' 00:04:56.952 15:59:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1040927 00:04:56.952 15:59:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.952 15:59:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1040927 00:04:56.952 15:59:32 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@835 -- # '[' -z 1040927 ']' 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.952 15:59:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.213 [2024-11-20 15:59:32.941983] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:04:57.213 [2024-11-20 15:59:32.942050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040927 ] 00:04:57.213 [2024-11-20 15:59:33.035857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.213 [2024-11-20 15:59:33.087619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.213 [2024-11-20 15:59:33.087685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1040927' to capture a snapshot of events at runtime. 00:04:57.213 [2024-11-20 15:59:33.087694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.213 [2024-11-20 15:59:33.087701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.213 [2024-11-20 15:59:33.087708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1040927 for offline analysis/debug. 
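waitforlisten above simply polls until spdk_tgt accepts connections on /var/tmp/spdk.sock; every rpc_cmd in the tests that follow is one JSON-RPC round trip over that socket via rpc.py. The same round trip in C might look like the following sketch, assuming SPDK's jsonrpc client API; rpc_get_methods is a built-in method chosen only for illustration:

#include <sys/socket.h>
#include "spdk/json.h"
#include "spdk/jsonrpc.h"

int
rpc_roundtrip(void)
{
	struct spdk_jsonrpc_client *client;
	struct spdk_jsonrpc_client_request *req;
	struct spdk_json_write_ctx *w;
	int rc;

	client = spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);
	if (client == NULL)
		return -1;

	req = spdk_jsonrpc_client_create_request();
	if (req == NULL) {
		spdk_jsonrpc_client_free(client);
		return -1;
	}

	w = spdk_jsonrpc_begin_request(req, 1, "rpc_get_methods");
	spdk_jsonrpc_end_request(req, w);
	spdk_jsonrpc_client_send_request(client, req); /* takes ownership */

	/* Poll until a response (rc > 0) or an error (rc < 0) arrives. */
	while ((rc = spdk_jsonrpc_client_poll(client, 1)) == 0)
		;

	spdk_jsonrpc_client_free(client);
	return rc > 0 ? 0 : rc;
}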
00:04:57.213 [2024-11-20 15:59:33.088522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.155 15:59:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.155 15:59:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.155 15:59:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.155 15:59:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.155 15:59:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.155 15:59:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.155 15:59:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.155 15:59:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.155 15:59:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 ************************************ 00:04:58.155 START TEST rpc_integrity 00:04:58.155 ************************************ 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:58.155 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.155 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.155 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.155 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.155 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.155 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.155 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.156 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.156 { 00:04:58.156 "name": "Malloc0", 00:04:58.156 "aliases": [ 00:04:58.156 "055c0251-28b6-4a47-95fe-34eb8ab49d74" 00:04:58.156 ], 00:04:58.156 "product_name": "Malloc disk", 00:04:58.156 "block_size": 512, 00:04:58.156 "num_blocks": 16384, 00:04:58.156 "uuid": "055c0251-28b6-4a47-95fe-34eb8ab49d74", 00:04:58.156 "assigned_rate_limits": { 00:04:58.156 "rw_ios_per_sec": 0, 00:04:58.156 "rw_mbytes_per_sec": 0, 00:04:58.156 "r_mbytes_per_sec": 0, 00:04:58.156 "w_mbytes_per_sec": 0 00:04:58.156 }, 
00:04:58.156 "claimed": false, 00:04:58.156 "zoned": false, 00:04:58.156 "supported_io_types": { 00:04:58.156 "read": true, 00:04:58.156 "write": true, 00:04:58.156 "unmap": true, 00:04:58.156 "flush": true, 00:04:58.156 "reset": true, 00:04:58.156 "nvme_admin": false, 00:04:58.156 "nvme_io": false, 00:04:58.156 "nvme_io_md": false, 00:04:58.156 "write_zeroes": true, 00:04:58.156 "zcopy": true, 00:04:58.156 "get_zone_info": false, 00:04:58.156 "zone_management": false, 00:04:58.156 "zone_append": false, 00:04:58.156 "compare": false, 00:04:58.156 "compare_and_write": false, 00:04:58.156 "abort": true, 00:04:58.156 "seek_hole": false, 00:04:58.156 "seek_data": false, 00:04:58.156 "copy": true, 00:04:58.156 "nvme_iov_md": false 00:04:58.156 }, 00:04:58.156 "memory_domains": [ 00:04:58.156 { 00:04:58.156 "dma_device_id": "system", 00:04:58.156 "dma_device_type": 1 00:04:58.156 }, 00:04:58.156 { 00:04:58.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.156 "dma_device_type": 2 00:04:58.156 } 00:04:58.156 ], 00:04:58.156 "driver_specific": {} 00:04:58.156 } 00:04:58.156 ]' 00:04:58.156 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.156 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.156 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.156 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.156 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.156 [2024-11-20 15:59:33.946865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.156 [2024-11-20 15:59:33.946918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.156 [2024-11-20 15:59:33.946935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12cb800 00:04:58.156 [2024-11-20 15:59:33.946944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.156 [2024-11-20 15:59:33.948517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.156 [2024-11-20 15:59:33.948557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.156 Passthru0 00:04:58.156 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.156 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.156 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.156 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.156 15:59:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.156 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.156 { 00:04:58.156 "name": "Malloc0", 00:04:58.156 "aliases": [ 00:04:58.156 "055c0251-28b6-4a47-95fe-34eb8ab49d74" 00:04:58.156 ], 00:04:58.156 "product_name": "Malloc disk", 00:04:58.156 "block_size": 512, 00:04:58.156 "num_blocks": 16384, 00:04:58.156 "uuid": "055c0251-28b6-4a47-95fe-34eb8ab49d74", 00:04:58.156 "assigned_rate_limits": { 00:04:58.156 "rw_ios_per_sec": 0, 00:04:58.156 "rw_mbytes_per_sec": 0, 00:04:58.156 "r_mbytes_per_sec": 0, 00:04:58.156 "w_mbytes_per_sec": 0 00:04:58.156 }, 00:04:58.156 "claimed": true, 00:04:58.156 "claim_type": "exclusive_write", 00:04:58.156 "zoned": false, 00:04:58.156 "supported_io_types": { 00:04:58.156 "read": true, 00:04:58.156 "write": true, 00:04:58.156 "unmap": true, 00:04:58.156 "flush": 
true, 00:04:58.156 "reset": true, 00:04:58.156 "nvme_admin": false, 00:04:58.156 "nvme_io": false, 00:04:58.156 "nvme_io_md": false, 00:04:58.156 "write_zeroes": true, 00:04:58.156 "zcopy": true, 00:04:58.156 "get_zone_info": false, 00:04:58.156 "zone_management": false, 00:04:58.156 "zone_append": false, 00:04:58.156 "compare": false, 00:04:58.156 "compare_and_write": false, 00:04:58.156 "abort": true, 00:04:58.156 "seek_hole": false, 00:04:58.156 "seek_data": false, 00:04:58.156 "copy": true, 00:04:58.156 "nvme_iov_md": false 00:04:58.156 }, 00:04:58.156 "memory_domains": [ 00:04:58.156 { 00:04:58.156 "dma_device_id": "system", 00:04:58.156 "dma_device_type": 1 00:04:58.156 }, 00:04:58.156 { 00:04:58.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.156 "dma_device_type": 2 00:04:58.156 } 00:04:58.156 ], 00:04:58.156 "driver_specific": {} 00:04:58.156 }, 00:04:58.156 { 00:04:58.156 "name": "Passthru0", 00:04:58.156 "aliases": [ 00:04:58.156 "14bd72bd-5829-5b64-9f16-27278945d32c" 00:04:58.156 ], 00:04:58.156 "product_name": "passthru", 00:04:58.156 "block_size": 512, 00:04:58.156 "num_blocks": 16384, 00:04:58.156 "uuid": "14bd72bd-5829-5b64-9f16-27278945d32c", 00:04:58.156 "assigned_rate_limits": { 00:04:58.156 "rw_ios_per_sec": 0, 00:04:58.156 "rw_mbytes_per_sec": 0, 00:04:58.156 "r_mbytes_per_sec": 0, 00:04:58.156 "w_mbytes_per_sec": 0 00:04:58.156 }, 00:04:58.156 "claimed": false, 00:04:58.156 "zoned": false, 00:04:58.156 "supported_io_types": { 00:04:58.156 "read": true, 00:04:58.156 "write": true, 00:04:58.156 "unmap": true, 00:04:58.156 "flush": true, 00:04:58.156 "reset": true, 00:04:58.156 "nvme_admin": false, 00:04:58.156 "nvme_io": false, 00:04:58.156 "nvme_io_md": false, 00:04:58.156 "write_zeroes": true, 00:04:58.156 "zcopy": true, 00:04:58.156 "get_zone_info": false, 00:04:58.156 "zone_management": false, 00:04:58.156 "zone_append": false, 00:04:58.156 "compare": false, 00:04:58.156 "compare_and_write": false, 00:04:58.156 "abort": true, 00:04:58.156 "seek_hole": false, 00:04:58.156 "seek_data": false, 00:04:58.156 "copy": true, 00:04:58.156 "nvme_iov_md": false 00:04:58.156 }, 00:04:58.156 "memory_domains": [ 00:04:58.156 { 00:04:58.156 "dma_device_id": "system", 00:04:58.156 "dma_device_type": 1 00:04:58.156 }, 00:04:58.156 { 00:04:58.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.156 "dma_device_type": 2 00:04:58.156 } 00:04:58.156 ], 00:04:58.156 "driver_specific": { 00:04:58.156 "passthru": { 00:04:58.156 "name": "Passthru0", 00:04:58.156 "base_bdev_name": "Malloc0" 00:04:58.156 } 00:04:58.156 } 00:04:58.156 } 00:04:58.156 ]' 00:04:58.156 15:59:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.156 15:59:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.156 15:59:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.156 15:59:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.156 15:59:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.156 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.156 15:59:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.156 15:59:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.417 15:59:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.417 00:04:58.417 real 0m0.306s 00:04:58.417 user 0m0.186s 00:04:58.417 sys 0m0.050s 00:04:58.417 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.417 15:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.417 ************************************ 00:04:58.417 END TEST rpc_integrity 00:04:58.417 ************************************ 00:04:58.417 15:59:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.417 15:59:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.417 15:59:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.417 15:59:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.417 ************************************ 00:04:58.417 START TEST rpc_plugins 00:04:58.417 ************************************ 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.417 { 00:04:58.417 "name": "Malloc1", 00:04:58.417 "aliases": [ 00:04:58.417 "3f0c011a-45bf-4819-8787-f95263b7659d" 00:04:58.417 ], 00:04:58.417 "product_name": "Malloc disk", 00:04:58.417 "block_size": 4096, 00:04:58.417 "num_blocks": 256, 00:04:58.417 "uuid": "3f0c011a-45bf-4819-8787-f95263b7659d", 00:04:58.417 "assigned_rate_limits": { 00:04:58.417 "rw_ios_per_sec": 0, 00:04:58.417 "rw_mbytes_per_sec": 0, 00:04:58.417 "r_mbytes_per_sec": 0, 00:04:58.417 "w_mbytes_per_sec": 0 00:04:58.417 }, 00:04:58.417 "claimed": false, 00:04:58.417 "zoned": false, 00:04:58.417 "supported_io_types": { 00:04:58.417 "read": true, 00:04:58.417 "write": true, 00:04:58.417 "unmap": true, 00:04:58.417 "flush": true, 00:04:58.417 "reset": true, 00:04:58.417 "nvme_admin": false, 00:04:58.417 "nvme_io": false, 00:04:58.417 "nvme_io_md": false, 00:04:58.417 "write_zeroes": true, 00:04:58.417 "zcopy": true, 00:04:58.417 "get_zone_info": false, 00:04:58.417 "zone_management": false, 00:04:58.417 "zone_append": false, 00:04:58.417 "compare": false, 00:04:58.417 "compare_and_write": false, 00:04:58.417 "abort": true, 00:04:58.417 "seek_hole": false, 00:04:58.417 "seek_data": false, 00:04:58.417 "copy": true, 00:04:58.417 "nvme_iov_md": false 
00:04:58.417 }, 00:04:58.417 "memory_domains": [ 00:04:58.417 { 00:04:58.417 "dma_device_id": "system", 00:04:58.417 "dma_device_type": 1 00:04:58.417 }, 00:04:58.417 { 00:04:58.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.417 "dma_device_type": 2 00:04:58.417 } 00:04:58.417 ], 00:04:58.417 "driver_specific": {} 00:04:58.417 } 00:04:58.417 ]' 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.417 15:59:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.417 00:04:58.417 real 0m0.144s 00:04:58.417 user 0m0.082s 00:04:58.417 sys 0m0.027s 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.417 15:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.417 ************************************ 00:04:58.417 END TEST rpc_plugins 00:04:58.417 ************************************ 00:04:58.679 15:59:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:58.679 15:59:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.679 15:59:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.679 15:59:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.679 ************************************ 00:04:58.679 START TEST rpc_trace_cmd_test 00:04:58.679 ************************************ 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:58.679 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1040927", 00:04:58.679 "tpoint_group_mask": "0x8", 00:04:58.679 "iscsi_conn": { 00:04:58.679 "mask": "0x2", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "scsi": { 00:04:58.679 "mask": "0x4", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "bdev": { 00:04:58.679 "mask": "0x8", 00:04:58.679 "tpoint_mask": "0xffffffffffffffff" 00:04:58.679 }, 00:04:58.679 "nvmf_rdma": { 00:04:58.679 "mask": "0x10", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "nvmf_tcp": { 00:04:58.679 "mask": "0x20", 00:04:58.679 
"tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "ftl": { 00:04:58.679 "mask": "0x40", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "blobfs": { 00:04:58.679 "mask": "0x80", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "dsa": { 00:04:58.679 "mask": "0x200", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "thread": { 00:04:58.679 "mask": "0x400", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "nvme_pcie": { 00:04:58.679 "mask": "0x800", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "iaa": { 00:04:58.679 "mask": "0x1000", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "nvme_tcp": { 00:04:58.679 "mask": "0x2000", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "bdev_nvme": { 00:04:58.679 "mask": "0x4000", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "sock": { 00:04:58.679 "mask": "0x8000", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "blob": { 00:04:58.679 "mask": "0x10000", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "bdev_raid": { 00:04:58.679 "mask": "0x20000", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 }, 00:04:58.679 "scheduler": { 00:04:58.679 "mask": "0x40000", 00:04:58.679 "tpoint_mask": "0x0" 00:04:58.679 } 00:04:58.679 }' 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.679 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.943 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.943 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:58.943 15:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:58.943 00:04:58.943 real 0m0.251s 00:04:58.943 user 0m0.218s 00:04:58.943 sys 0m0.025s 00:04:58.943 15:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.943 15:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.943 ************************************ 00:04:58.943 END TEST rpc_trace_cmd_test 00:04:58.943 ************************************ 00:04:58.943 15:59:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:58.943 15:59:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:58.943 15:59:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:58.943 15:59:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.943 15:59:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.943 15:59:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.943 ************************************ 00:04:58.943 START TEST rpc_daemon_integrity 00:04:58.943 ************************************ 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.943 15:59:34 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.943 { 00:04:58.943 "name": "Malloc2", 00:04:58.943 "aliases": [ 00:04:58.943 "a64c49a6-4a63-4d68-8d21-64d4e0f5640a" 00:04:58.943 ], 00:04:58.943 "product_name": "Malloc disk", 00:04:58.943 "block_size": 512, 00:04:58.943 "num_blocks": 16384, 00:04:58.943 "uuid": "a64c49a6-4a63-4d68-8d21-64d4e0f5640a", 00:04:58.943 "assigned_rate_limits": { 00:04:58.943 "rw_ios_per_sec": 0, 00:04:58.943 "rw_mbytes_per_sec": 0, 00:04:58.943 "r_mbytes_per_sec": 0, 00:04:58.943 "w_mbytes_per_sec": 0 00:04:58.943 }, 00:04:58.943 "claimed": false, 00:04:58.943 "zoned": false, 00:04:58.943 "supported_io_types": { 00:04:58.943 "read": true, 00:04:58.943 "write": true, 00:04:58.943 "unmap": true, 00:04:58.943 "flush": true, 00:04:58.943 "reset": true, 00:04:58.943 "nvme_admin": false, 00:04:58.943 "nvme_io": false, 00:04:58.943 "nvme_io_md": false, 00:04:58.943 "write_zeroes": true, 00:04:58.943 "zcopy": true, 00:04:58.943 "get_zone_info": false, 00:04:58.943 "zone_management": false, 00:04:58.943 "zone_append": false, 00:04:58.943 "compare": false, 00:04:58.943 "compare_and_write": false, 00:04:58.943 "abort": true, 00:04:58.943 "seek_hole": false, 00:04:58.943 "seek_data": false, 00:04:58.943 "copy": true, 00:04:58.943 "nvme_iov_md": false 00:04:58.943 }, 00:04:58.943 "memory_domains": [ 00:04:58.943 { 00:04:58.943 "dma_device_id": "system", 00:04:58.943 "dma_device_type": 1 00:04:58.943 }, 00:04:58.943 { 00:04:58.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.943 "dma_device_type": 2 00:04:58.943 } 00:04:58.943 ], 00:04:58.943 "driver_specific": {} 00:04:58.943 } 00:04:58.943 ]' 00:04:58.943 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.206 [2024-11-20 15:59:34.885421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.206 
[2024-11-20 15:59:34.885466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.206 [2024-11-20 15:59:34.885481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1187fe0 00:04:59.206 [2024-11-20 15:59:34.885489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.206 [2024-11-20 15:59:34.886935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.206 [2024-11-20 15:59:34.886971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.206 Passthru0 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.206 { 00:04:59.206 "name": "Malloc2", 00:04:59.206 "aliases": [ 00:04:59.206 "a64c49a6-4a63-4d68-8d21-64d4e0f5640a" 00:04:59.206 ], 00:04:59.206 "product_name": "Malloc disk", 00:04:59.206 "block_size": 512, 00:04:59.206 "num_blocks": 16384, 00:04:59.206 "uuid": "a64c49a6-4a63-4d68-8d21-64d4e0f5640a", 00:04:59.206 "assigned_rate_limits": { 00:04:59.206 "rw_ios_per_sec": 0, 00:04:59.206 "rw_mbytes_per_sec": 0, 00:04:59.206 "r_mbytes_per_sec": 0, 00:04:59.206 "w_mbytes_per_sec": 0 00:04:59.206 }, 00:04:59.206 "claimed": true, 00:04:59.206 "claim_type": "exclusive_write", 00:04:59.206 "zoned": false, 00:04:59.206 "supported_io_types": { 00:04:59.206 "read": true, 00:04:59.206 "write": true, 00:04:59.206 "unmap": true, 00:04:59.206 "flush": true, 00:04:59.206 "reset": true, 00:04:59.206 "nvme_admin": false, 00:04:59.206 "nvme_io": false, 00:04:59.206 "nvme_io_md": false, 00:04:59.206 "write_zeroes": true, 00:04:59.206 "zcopy": true, 00:04:59.206 "get_zone_info": false, 00:04:59.206 "zone_management": false, 00:04:59.206 "zone_append": false, 00:04:59.206 "compare": false, 00:04:59.206 "compare_and_write": false, 00:04:59.206 "abort": true, 00:04:59.206 "seek_hole": false, 00:04:59.206 "seek_data": false, 00:04:59.206 "copy": true, 00:04:59.206 "nvme_iov_md": false 00:04:59.206 }, 00:04:59.206 "memory_domains": [ 00:04:59.206 { 00:04:59.206 "dma_device_id": "system", 00:04:59.206 "dma_device_type": 1 00:04:59.206 }, 00:04:59.206 { 00:04:59.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.206 "dma_device_type": 2 00:04:59.206 } 00:04:59.206 ], 00:04:59.206 "driver_specific": {} 00:04:59.206 }, 00:04:59.206 { 00:04:59.206 "name": "Passthru0", 00:04:59.206 "aliases": [ 00:04:59.206 "083ee08c-a9cd-5ef4-953b-40c5ed9a19fc" 00:04:59.206 ], 00:04:59.206 "product_name": "passthru", 00:04:59.206 "block_size": 512, 00:04:59.206 "num_blocks": 16384, 00:04:59.206 "uuid": "083ee08c-a9cd-5ef4-953b-40c5ed9a19fc", 00:04:59.206 "assigned_rate_limits": { 00:04:59.206 "rw_ios_per_sec": 0, 00:04:59.206 "rw_mbytes_per_sec": 0, 00:04:59.206 "r_mbytes_per_sec": 0, 00:04:59.206 "w_mbytes_per_sec": 0 00:04:59.206 }, 00:04:59.206 "claimed": false, 00:04:59.206 "zoned": false, 00:04:59.206 "supported_io_types": { 00:04:59.206 "read": true, 00:04:59.206 "write": true, 00:04:59.206 "unmap": true, 00:04:59.206 "flush": true, 00:04:59.206 "reset": true, 
00:04:59.206 "nvme_admin": false, 00:04:59.206 "nvme_io": false, 00:04:59.206 "nvme_io_md": false, 00:04:59.206 "write_zeroes": true, 00:04:59.206 "zcopy": true, 00:04:59.206 "get_zone_info": false, 00:04:59.206 "zone_management": false, 00:04:59.206 "zone_append": false, 00:04:59.206 "compare": false, 00:04:59.206 "compare_and_write": false, 00:04:59.206 "abort": true, 00:04:59.206 "seek_hole": false, 00:04:59.206 "seek_data": false, 00:04:59.206 "copy": true, 00:04:59.206 "nvme_iov_md": false 00:04:59.206 }, 00:04:59.206 "memory_domains": [ 00:04:59.206 { 00:04:59.206 "dma_device_id": "system", 00:04:59.206 "dma_device_type": 1 00:04:59.206 }, 00:04:59.206 { 00:04:59.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.206 "dma_device_type": 2 00:04:59.206 } 00:04:59.206 ], 00:04:59.206 "driver_specific": { 00:04:59.206 "passthru": { 00:04:59.206 "name": "Passthru0", 00:04:59.206 "base_bdev_name": "Malloc2" 00:04:59.206 } 00:04:59.206 } 00:04:59.206 } 00:04:59.206 ]' 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.206 15:59:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.206 15:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.206 15:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.206 00:04:59.206 real 0m0.304s 00:04:59.206 user 0m0.192s 00:04:59.206 sys 0m0.046s 00:04:59.206 15:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.206 15:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.206 ************************************ 00:04:59.206 END TEST rpc_daemon_integrity 00:04:59.206 ************************************ 00:04:59.206 15:59:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.206 15:59:35 rpc -- rpc/rpc.sh@84 -- # killprocess 1040927 00:04:59.206 15:59:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 1040927 ']' 00:04:59.206 15:59:35 rpc -- common/autotest_common.sh@958 -- # kill -0 1040927 00:04:59.206 15:59:35 rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.206 15:59:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.206 15:59:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1040927 
00:04:59.467 15:59:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.467 15:59:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.467 15:59:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1040927' 00:04:59.467 killing process with pid 1040927 00:04:59.467 15:59:35 rpc -- common/autotest_common.sh@973 -- # kill 1040927 00:04:59.468 15:59:35 rpc -- common/autotest_common.sh@978 -- # wait 1040927 00:04:59.468 00:04:59.468 real 0m2.720s 00:04:59.468 user 0m3.448s 00:04:59.468 sys 0m0.861s 00:04:59.468 15:59:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.468 15:59:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.468 ************************************ 00:04:59.468 END TEST rpc 00:04:59.468 ************************************ 00:04:59.728 15:59:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:59.728 15:59:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.728 15:59:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.728 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:59.728 ************************************ 00:04:59.728 START TEST skip_rpc 00:04:59.728 ************************************ 00:04:59.729 15:59:35 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:59.729 * Looking for test storage... 00:04:59.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.729 15:59:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.729 15:59:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.729 15:59:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.729 15:59:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.729 15:59:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.990 15:59:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.990 --rc genhtml_branch_coverage=1 00:04:59.990 --rc genhtml_function_coverage=1 00:04:59.990 --rc genhtml_legend=1 00:04:59.990 --rc geninfo_all_blocks=1 00:04:59.990 --rc geninfo_unexecuted_blocks=1 00:04:59.990 00:04:59.990 ' 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.990 --rc genhtml_branch_coverage=1 00:04:59.990 --rc genhtml_function_coverage=1 00:04:59.990 --rc genhtml_legend=1 00:04:59.990 --rc geninfo_all_blocks=1 00:04:59.990 --rc geninfo_unexecuted_blocks=1 00:04:59.990 00:04:59.990 ' 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.990 --rc genhtml_branch_coverage=1 00:04:59.990 --rc genhtml_function_coverage=1 00:04:59.990 --rc genhtml_legend=1 00:04:59.990 --rc geninfo_all_blocks=1 00:04:59.990 --rc geninfo_unexecuted_blocks=1 00:04:59.990 00:04:59.990 ' 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.990 --rc genhtml_branch_coverage=1 00:04:59.990 --rc genhtml_function_coverage=1 00:04:59.990 --rc genhtml_legend=1 00:04:59.990 --rc geninfo_all_blocks=1 00:04:59.990 --rc geninfo_unexecuted_blocks=1 00:04:59.990 00:04:59.990 ' 00:04:59.990 15:59:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.990 15:59:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:59.990 15:59:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.990 15:59:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.990 ************************************ 00:04:59.990 START TEST skip_rpc 00:04:59.990 ************************************ 00:04:59.990 15:59:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:59.990 
15:59:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1041769 00:04:59.990 15:59:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.990 15:59:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:59.990 15:59:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:59.990 [2024-11-20 15:59:35.784220] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:04:59.990 [2024-11-20 15:59:35.784283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041769 ] 00:04:59.990 [2024-11-20 15:59:35.874770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.251 [2024-11-20 15:59:35.926608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1041769 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1041769 ']' 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1041769 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1041769 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1041769' 00:05:05.558 killing process with pid 1041769 00:05:05.558 15:59:40 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1041769 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1041769 00:05:05.558 00:05:05.558 real 0m5.267s 00:05:05.558 user 0m5.024s 00:05:05.558 sys 0m0.293s 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.558 15:59:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 ************************************ 00:05:05.558 END TEST skip_rpc 00:05:05.558 ************************************ 00:05:05.558 15:59:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.558 15:59:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.558 15:59:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.558 15:59:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 ************************************ 00:05:05.558 START TEST skip_rpc_with_json 00:05:05.558 ************************************ 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1042811 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1042811 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1042811 ']' 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.558 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.558 [2024-11-20 15:59:41.124957] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
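
The skip_rpc pass above boils down to one assertion: with spdk_tgt started under --no-rpc-server, no RPC can succeed. A minimal bash sketch of that pattern, reusing this job's workspace path and calling scripts/rpc.py directly (the real test goes through the rpc_cmd/NOT wrappers in autotest_common.sh and its own pid bookkeeping):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path assumed from this job
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'kill -9 $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5                                         # stand-in for the test's fixed 5s wait
    if $SPDK/scripts/rpc.py spdk_get_version; then  # must fail: nothing listens on /var/tmp/spdk.sock
        echo "spdk_get_version unexpectedly succeeded" >&2
        exit 1
    fi
    trap - SIGINT SIGTERM EXIT
    kill $spdk_pid && wait $spdk_pid
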
00:05:05.558 [2024-11-20 15:59:41.125005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042811 ]
00:05:05.558 [2024-11-20 15:59:41.210462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.558 [2024-11-20 15:59:41.241731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:06.130 [2024-11-20 15:59:41.921043] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:06.130 request:
00:05:06.130 {
00:05:06.130 "trtype": "tcp",
00:05:06.130 "method": "nvmf_get_transports",
00:05:06.130 "req_id": 1
00:05:06.130 }
00:05:06.130 Got JSON-RPC error response
00:05:06.130 response:
00:05:06.130 {
00:05:06.130 "code": -19,
00:05:06.130 "message": "No such device"
00:05:06.130 }
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:06.130 [2024-11-20 15:59:41.933140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:06.130 15:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:06.390 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:06.390 15:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:06.390 {
00:05:06.390 "subsystems": [
00:05:06.390 {
00:05:06.390 "subsystem": "fsdev",
00:05:06.390 "config": [
00:05:06.390 {
00:05:06.390 "method": "fsdev_set_opts",
00:05:06.390 "params": {
00:05:06.390 "fsdev_io_pool_size": 65535,
00:05:06.390 "fsdev_io_cache_size": 256
00:05:06.390 }
00:05:06.390 }
00:05:06.390 ]
00:05:06.390 },
00:05:06.390 {
00:05:06.390 "subsystem": "vfio_user_target",
00:05:06.390 "config": null
00:05:06.390 },
00:05:06.390 {
00:05:06.390 "subsystem": "keyring",
00:05:06.390 "config": []
00:05:06.390 },
00:05:06.390 {
00:05:06.390 "subsystem": "iobuf",
00:05:06.390 "config": [
00:05:06.390 {
00:05:06.390 "method": "iobuf_set_options",
00:05:06.390 "params": {
00:05:06.390 "small_pool_count": 8192,
00:05:06.390 "large_pool_count": 1024,
00:05:06.390 "small_bufsize": 8192,
00:05:06.390 "large_bufsize": 135168,
00:05:06.390 "enable_numa": false
00:05:06.390 }
00:05:06.390 }
00:05:06.390 ]
00:05:06.390 },
00:05:06.390 {
00:05:06.390 "subsystem": "sock",
00:05:06.390 "config": [
00:05:06.390 {
00:05:06.390 "method": "sock_set_default_impl",
00:05:06.390 "params": {
00:05:06.390 "impl_name": "posix"
00:05:06.390 }
00:05:06.390 },
00:05:06.390 {
00:05:06.390 "method": "sock_impl_set_options",
00:05:06.390 "params": {
00:05:06.390 "impl_name": "ssl",
00:05:06.390 "recv_buf_size": 4096,
00:05:06.390 "send_buf_size": 4096,
00:05:06.390 "enable_recv_pipe": true,
00:05:06.391 "enable_quickack": false,
00:05:06.391 "enable_placement_id": 0,
00:05:06.391 "enable_zerocopy_send_server": true,
00:05:06.391 "enable_zerocopy_send_client": false,
00:05:06.391 "zerocopy_threshold": 0,
00:05:06.391 "tls_version": 0,
00:05:06.391 "enable_ktls": false
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "sock_impl_set_options",
00:05:06.391 "params": {
00:05:06.391 "impl_name": "posix",
00:05:06.391 "recv_buf_size": 2097152,
00:05:06.391 "send_buf_size": 2097152,
00:05:06.391 "enable_recv_pipe": true,
00:05:06.391 "enable_quickack": false,
00:05:06.391 "enable_placement_id": 0,
00:05:06.391 "enable_zerocopy_send_server": true,
00:05:06.391 "enable_zerocopy_send_client": false,
00:05:06.391 "zerocopy_threshold": 0,
00:05:06.391 "tls_version": 0,
00:05:06.391 "enable_ktls": false
00:05:06.391 }
00:05:06.391 }
00:05:06.391 ]
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "vmd",
00:05:06.391 "config": []
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "accel",
00:05:06.391 "config": [
00:05:06.391 {
00:05:06.391 "method": "accel_set_options",
00:05:06.391 "params": {
00:05:06.391 "small_cache_size": 128,
00:05:06.391 "large_cache_size": 16,
00:05:06.391 "task_count": 2048,
00:05:06.391 "sequence_count": 2048,
00:05:06.391 "buf_count": 2048
00:05:06.391 }
00:05:06.391 }
00:05:06.391 ]
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "bdev",
00:05:06.391 "config": [
00:05:06.391 {
00:05:06.391 "method": "bdev_set_options",
00:05:06.391 "params": {
00:05:06.391 "bdev_io_pool_size": 65535,
00:05:06.391 "bdev_io_cache_size": 256,
00:05:06.391 "bdev_auto_examine": true,
00:05:06.391 "iobuf_small_cache_size": 128,
00:05:06.391 "iobuf_large_cache_size": 16
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "bdev_raid_set_options",
00:05:06.391 "params": {
00:05:06.391 "process_window_size_kb": 1024,
00:05:06.391 "process_max_bandwidth_mb_sec": 0
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "bdev_iscsi_set_options",
00:05:06.391 "params": {
00:05:06.391 "timeout_sec": 30
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "bdev_nvme_set_options",
00:05:06.391 "params": {
00:05:06.391 "action_on_timeout": "none",
00:05:06.391 "timeout_us": 0,
00:05:06.391 "timeout_admin_us": 0,
00:05:06.391 "keep_alive_timeout_ms": 10000,
00:05:06.391 "arbitration_burst": 0,
00:05:06.391 "low_priority_weight": 0,
00:05:06.391 "medium_priority_weight": 0,
00:05:06.391 "high_priority_weight": 0,
00:05:06.391 "nvme_adminq_poll_period_us": 10000,
00:05:06.391 "nvme_ioq_poll_period_us": 0,
00:05:06.391 "io_queue_requests": 0,
00:05:06.391 "delay_cmd_submit": true,
00:05:06.391 "transport_retry_count": 4,
00:05:06.391 "bdev_retry_count": 3,
00:05:06.391 "transport_ack_timeout": 0,
00:05:06.391 "ctrlr_loss_timeout_sec": 0,
00:05:06.391 "reconnect_delay_sec": 0,
00:05:06.391 "fast_io_fail_timeout_sec": 0,
00:05:06.391 "disable_auto_failback": false,
00:05:06.391 "generate_uuids": false,
00:05:06.391 "transport_tos": 0,
00:05:06.391 "nvme_error_stat": false,
00:05:06.391 "rdma_srq_size": 0,
00:05:06.391 "io_path_stat": false,
00:05:06.391 "allow_accel_sequence": false,
00:05:06.391 "rdma_max_cq_size": 0,
00:05:06.391 "rdma_cm_event_timeout_ms": 0,
00:05:06.391 "dhchap_digests": [
00:05:06.391 "sha256",
00:05:06.391 "sha384",
00:05:06.391 "sha512"
00:05:06.391 ],
00:05:06.391 "dhchap_dhgroups": [
00:05:06.391 "null",
00:05:06.391 "ffdhe2048",
00:05:06.391 "ffdhe3072",
00:05:06.391 "ffdhe4096",
00:05:06.391 "ffdhe6144",
00:05:06.391 "ffdhe8192"
00:05:06.391 ]
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "bdev_nvme_set_hotplug",
00:05:06.391 "params": {
00:05:06.391 "period_us": 100000,
00:05:06.391 "enable": false
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "bdev_wait_for_examine"
00:05:06.391 }
00:05:06.391 ]
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "scsi",
00:05:06.391 "config": null
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "scheduler",
00:05:06.391 "config": [
00:05:06.391 {
00:05:06.391 "method": "framework_set_scheduler",
00:05:06.391 "params": {
00:05:06.391 "name": "static"
00:05:06.391 }
00:05:06.391 }
00:05:06.391 ]
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "vhost_scsi",
00:05:06.391 "config": []
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "vhost_blk",
00:05:06.391 "config": []
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "ublk",
00:05:06.391 "config": []
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "nbd",
00:05:06.391 "config": []
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "nvmf",
00:05:06.391 "config": [
00:05:06.391 {
00:05:06.391 "method": "nvmf_set_config",
00:05:06.391 "params": {
00:05:06.391 "discovery_filter": "match_any",
00:05:06.391 "admin_cmd_passthru": {
00:05:06.391 "identify_ctrlr": false
00:05:06.391 },
00:05:06.391 "dhchap_digests": [
00:05:06.391 "sha256",
00:05:06.391 "sha384",
00:05:06.391 "sha512"
00:05:06.391 ],
00:05:06.391 "dhchap_dhgroups": [
00:05:06.391 "null",
00:05:06.391 "ffdhe2048",
00:05:06.391 "ffdhe3072",
00:05:06.391 "ffdhe4096",
00:05:06.391 "ffdhe6144",
00:05:06.391 "ffdhe8192"
00:05:06.391 ]
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "nvmf_set_max_subsystems",
00:05:06.391 "params": {
00:05:06.391 "max_subsystems": 1024
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "nvmf_set_crdt",
00:05:06.391 "params": {
00:05:06.391 "crdt1": 0,
00:05:06.391 "crdt2": 0,
00:05:06.391 "crdt3": 0
00:05:06.391 }
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "method": "nvmf_create_transport",
00:05:06.391 "params": {
00:05:06.391 "trtype": "TCP",
00:05:06.391 "max_queue_depth": 128,
00:05:06.391 "max_io_qpairs_per_ctrlr": 127,
00:05:06.391 "in_capsule_data_size": 4096,
00:05:06.391 "max_io_size": 131072,
00:05:06.391 "io_unit_size": 131072,
00:05:06.391 "max_aq_depth": 128,
00:05:06.391 "num_shared_buffers": 511,
00:05:06.391 "buf_cache_size": 4294967295,
00:05:06.391 "dif_insert_or_strip": false,
00:05:06.391 "zcopy": false,
00:05:06.391 "c2h_success": true,
00:05:06.391 "sock_priority": 0,
00:05:06.391 "abort_timeout_sec": 1,
00:05:06.391 "ack_timeout": 0,
00:05:06.391 "data_wr_pool_size": 0
00:05:06.391 }
00:05:06.391 }
00:05:06.391 ]
00:05:06.391 },
00:05:06.391 {
00:05:06.391 "subsystem": "iscsi",
00:05:06.391 "config": [
00:05:06.391 {
00:05:06.391 "method": "iscsi_set_options",
00:05:06.391 "params": {
00:05:06.391 "node_base": "iqn.2016-06.io.spdk",
00:05:06.391 "max_sessions": 128,
00:05:06.391 "max_connections_per_session": 2,
00:05:06.391 "max_queue_depth": 64,
00:05:06.391 "default_time2wait": 2,
00:05:06.391 "default_time2retain": 20,
00:05:06.391 "first_burst_length": 8192,
00:05:06.391 "immediate_data": true,
00:05:06.391 "allow_duplicated_isid": false,
00:05:06.391 "error_recovery_level": 0,
00:05:06.391 "nop_timeout": 60,
00:05:06.391 "nop_in_interval": 30,
00:05:06.391 "disable_chap": false,
00:05:06.391 "require_chap": false,
00:05:06.391 "mutual_chap": false,
00:05:06.391 "chap_group": 0,
00:05:06.391 "max_large_datain_per_connection": 64,
00:05:06.391 "max_r2t_per_connection": 4,
00:05:06.391 "pdu_pool_size": 36864,
00:05:06.391 "immediate_data_pool_size": 16384,
00:05:06.391 "data_out_pool_size": 2048
00:05:06.391 }
00:05:06.391 }
00:05:06.391 ]
00:05:06.391 }
00:05:06.391 ]
00:05:06.391 }
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1042811
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1042811 ']'
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1042811
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042811
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:06.391 15:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 --
# echo 'killing process with pid 1043156' 00:05:11.941 killing process with pid 1043156 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1043156 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1043156 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:11.941 00:05:11.941 real 0m6.570s 00:05:11.941 user 0m6.482s 00:05:11.941 sys 0m0.565s 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.941 ************************************ 00:05:11.941 END TEST skip_rpc_with_json 00:05:11.941 ************************************ 00:05:11.941 15:59:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:11.941 15:59:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.941 15:59:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.941 15:59:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.941 ************************************ 00:05:11.941 START TEST skip_rpc_with_delay 00:05:11.941 ************************************ 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.941 
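
skip_rpc_with_json, finishing above, checks the configuration round trip: state is built over RPC (the first nvmf_get_transports deliberately fails with JSON-RPC error -19 because no TCP transport exists yet), captured with save_config, and then a fresh target must reconstruct it from the file alone. A condensed sketch under the same path assumption, with a plain sleep standing in for the waitforlisten/killprocess helpers the real test uses:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    sleep 5
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp    # now nvmf_get_transports would succeed
    $SPDK/scripts/rpc.py save_config > config.json       # serialize the live subsystem state
    kill $spdk_pid; wait $spdk_pid
    # Relaunch with the RPC server disabled: config can only come from the JSON file.
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt                 # transport was restored from the file
    kill %%; rm log.txt config.json
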
[2024-11-20 15:59:47.768710] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.941 00:05:11.941 real 0m0.076s 00:05:11.941 user 0m0.051s 00:05:11.941 sys 0m0.025s 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.941 15:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:11.941 ************************************ 00:05:11.941 END TEST skip_rpc_with_delay 00:05:11.941 ************************************ 00:05:11.941 15:59:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:11.941 15:59:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:11.941 15:59:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:11.941 15:59:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.941 15:59:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.941 15:59:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.941 ************************************ 00:05:11.941 START TEST exit_on_failed_rpc_init 00:05:11.941 ************************************ 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1044219 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1044219 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1044219 ']' 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.941 15:59:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.202 [2024-11-20 15:59:47.923265] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
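
skip_rpc_with_delay, which ended above, needs no RPC traffic at all: it only asserts that --no-rpc-server and --wait-for-rpc are rejected as a pair, producing exactly the app.c error shown. A sketch of that single check, same path assumption (the real test measures the failure through the NOT helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    if $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "expected the flag combination to be rejected at startup" >&2
        exit 1
    fi    # the app exits non-zero before initialization, as logged above
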
00:05:12.202 [2024-11-20 15:59:47.923313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044219 ] 00:05:12.202 [2024-11-20 15:59:48.005770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.202 [2024-11-20 15:59:48.036635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.144 [2024-11-20 15:59:48.775840] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:13.144 [2024-11-20 15:59:48.775894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044405 ] 00:05:13.144 [2024-11-20 15:59:48.864105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.144 [2024-11-20 15:59:48.899972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.144 [2024-11-20 15:59:48.900025] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
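
The rpc.c error above is the point of exit_on_failed_rpc_init: the first target owns /var/tmp/spdk.sock, so a second target started with the default RPC socket must fail initialization and exit non-zero. Roughly, with paths assumed as before:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &                # holds /var/tmp/spdk.sock
    first_pid=$!
    sleep 5
    if $SPDK/build/bin/spdk_tgt -m 0x2; then         # same default socket: rpc listen fails
        echo "second instance should have exited non-zero" >&2
        exit 1
    fi
    kill $first_pid; wait $first_pid
    # A second instance could coexist with its own socket via -r, the flag the
    # json_config suite uses later, e.g. -r /var/tmp/spdk2.sock (name hypothetical).
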
00:05:13.144 [2024-11-20 15:59:48.900035] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:13.144 [2024-11-20 15:59:48.900042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1044219 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1044219 ']' 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1044219 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.144 15:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1044219 00:05:13.144 15:59:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.144 15:59:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.144 15:59:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1044219' 00:05:13.144 killing process with pid 1044219 00:05:13.144 15:59:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1044219 00:05:13.144 15:59:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1044219 00:05:13.406 00:05:13.406 real 0m1.323s 00:05:13.406 user 0m1.580s 00:05:13.406 sys 0m0.362s 00:05:13.406 15:59:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.406 15:59:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.406 ************************************ 00:05:13.406 END TEST exit_on_failed_rpc_init 00:05:13.406 ************************************ 00:05:13.406 15:59:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.406 00:05:13.406 real 0m13.761s 00:05:13.406 user 0m13.383s 00:05:13.406 sys 0m1.554s 00:05:13.406 15:59:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.406 15:59:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.406 ************************************ 00:05:13.406 END TEST skip_rpc 00:05:13.406 ************************************ 00:05:13.406 15:59:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.406 15:59:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.406 15:59:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.406 15:59:49 -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.406 ************************************ 00:05:13.406 START TEST rpc_client 00:05:13.406 ************************************ 00:05:13.406 15:59:49 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.667 * Looking for test storage... 00:05:13.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.667 15:59:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.667 --rc genhtml_branch_coverage=1 00:05:13.667 --rc genhtml_function_coverage=1 00:05:13.667 --rc genhtml_legend=1 00:05:13.667 --rc geninfo_all_blocks=1 00:05:13.667 --rc geninfo_unexecuted_blocks=1 00:05:13.667 00:05:13.667 ' 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.667 --rc genhtml_branch_coverage=1 00:05:13.667 --rc genhtml_function_coverage=1 00:05:13.667 --rc genhtml_legend=1 00:05:13.667 --rc geninfo_all_blocks=1 00:05:13.667 --rc geninfo_unexecuted_blocks=1 00:05:13.667 00:05:13.667 ' 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.667 --rc genhtml_branch_coverage=1 00:05:13.667 --rc genhtml_function_coverage=1 00:05:13.667 --rc genhtml_legend=1 00:05:13.667 --rc geninfo_all_blocks=1 00:05:13.667 --rc geninfo_unexecuted_blocks=1 00:05:13.667 00:05:13.667 ' 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.667 --rc genhtml_branch_coverage=1 00:05:13.667 --rc genhtml_function_coverage=1 00:05:13.667 --rc genhtml_legend=1 00:05:13.667 --rc geninfo_all_blocks=1 00:05:13.667 --rc geninfo_unexecuted_blocks=1 00:05:13.667 00:05:13.667 ' 00:05:13.667 15:59:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:13.667 OK 00:05:13.667 15:59:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:13.667 00:05:13.667 real 0m0.219s 00:05:13.667 user 0m0.132s 00:05:13.667 sys 0m0.102s 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.667 15:59:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:13.667 ************************************ 00:05:13.667 END TEST rpc_client 00:05:13.667 ************************************ 00:05:13.667 15:59:49 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
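
The scripts/common.sh trace above (it reappears verbatim for json_config below) is a dotted-version comparison used to choose between lcov 1.x and 2.x option spellings. A compact bash rendering of the same idea, not the verbatim common.sh code: split both versions on the separators, then compare field by field numerically, treating missing fields as zero:

    version_lt() {                   # returns 0 when $1 sorts before $2
        local IFS=.-: a b v
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0    # assumes purely numeric fields
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                     # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "lcov 1.x: keep the --rc lcov_branch_coverage=1 spelling"
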
00:05:13.667 15:59:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.667 15:59:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.667 15:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:13.928 ************************************ 00:05:13.928 START TEST json_config 00:05:13.928 ************************************ 00:05:13.928 15:59:49 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:13.928 15:59:49 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.928 15:59:49 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.928 15:59:49 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.928 15:59:49 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.928 15:59:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.928 15:59:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.928 15:59:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.928 15:59:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.928 15:59:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.928 15:59:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.928 15:59:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.928 15:59:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.928 15:59:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.928 15:59:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.928 15:59:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.928 15:59:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:13.928 15:59:49 json_config -- scripts/common.sh@345 -- # : 1 00:05:13.928 15:59:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.928 15:59:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.928 15:59:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:13.928 15:59:49 json_config -- scripts/common.sh@353 -- # local d=1 00:05:13.928 15:59:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.928 15:59:49 json_config -- scripts/common.sh@355 -- # echo 1 00:05:13.928 15:59:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.928 15:59:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:13.928 15:59:49 json_config -- scripts/common.sh@353 -- # local d=2 00:05:13.929 15:59:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.929 15:59:49 json_config -- scripts/common.sh@355 -- # echo 2 00:05:13.929 15:59:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.929 15:59:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.929 15:59:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.929 15:59:49 json_config -- scripts/common.sh@368 -- # return 0 00:05:13.929 15:59:49 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.929 15:59:49 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.929 --rc genhtml_branch_coverage=1 00:05:13.929 --rc genhtml_function_coverage=1 00:05:13.929 --rc genhtml_legend=1 00:05:13.929 --rc geninfo_all_blocks=1 00:05:13.929 --rc geninfo_unexecuted_blocks=1 00:05:13.929 00:05:13.929 ' 00:05:13.929 15:59:49 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.929 --rc genhtml_branch_coverage=1 00:05:13.929 --rc genhtml_function_coverage=1 00:05:13.929 --rc genhtml_legend=1 00:05:13.929 --rc geninfo_all_blocks=1 00:05:13.929 --rc geninfo_unexecuted_blocks=1 00:05:13.929 00:05:13.929 ' 00:05:13.929 15:59:49 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.929 --rc genhtml_branch_coverage=1 00:05:13.929 --rc genhtml_function_coverage=1 00:05:13.929 --rc genhtml_legend=1 00:05:13.929 --rc geninfo_all_blocks=1 00:05:13.929 --rc geninfo_unexecuted_blocks=1 00:05:13.929 00:05:13.929 ' 00:05:13.929 15:59:49 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.929 --rc genhtml_branch_coverage=1 00:05:13.929 --rc genhtml_function_coverage=1 00:05:13.929 --rc genhtml_legend=1 00:05:13.929 --rc geninfo_all_blocks=1 00:05:13.929 --rc geninfo_unexecuted_blocks=1 00:05:13.929 00:05:13.929 ' 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:13.929 15:59:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.929 15:59:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.929 15:59:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.929 15:59:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.929 15:59:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.929 15:59:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.929 15:59:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.929 15:59:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.929 15:59:49 json_config -- paths/export.sh@5 -- # export PATH 00:05:13.929 15:59:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@51 -- # : 0 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
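
nvmf/common.sh, sourced above, fixes the host identity once per run: nvme gen-hostnqn emits an NQN ending in a UUID, the UUID is reused as the host ID, and both become a ready-made flag array. A sketch of one plausible derivation (the ##*: trim is an assumption, since the trace only shows the resulting values; the connect line is illustrative only, borrowing the NVME_SUBNQN default from the trace, and is not something this json_config run performs):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip through the last ':' to keep the uuid (assumed)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
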
00:05:13.929 15:59:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.929 15:59:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:13.929 15:59:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:13.930 INFO: JSON configuration test init 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.930 15:59:49 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:13.930 15:59:49 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:13.930 15:59:49 json_config -- json_config/common.sh@10 -- # shift 00:05:13.930 15:59:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.930 15:59:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.930 15:59:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.930 15:59:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.930 15:59:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.930 15:59:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1044694 00:05:13.930 15:59:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.930 Waiting for target to run... 00:05:13.930 15:59:49 json_config -- json_config/common.sh@25 -- # waitforlisten 1044694 /var/tmp/spdk_tgt.sock 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 1044694 ']' 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.930 15:59:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.930 15:59:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.191 [2024-11-20 15:59:49.888225] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
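
The harness state declared above keys everything by role, so 'target' and 'initiator' share one code path: a socket, a parameter string, and a pid per app, each in its own associative array. A condensed sketch of how json_config_test_start_app uses them (the real helper takes extra flags, checks $app_extra_params, and blocks on waitforlisten rather than sleeping):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    declare -A app_pid
    start_app() {                     # simplified json_config_test_start_app
        local app=$1; shift
        $SPDK/build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!              # params left unquoted so they expand to separate words
    }
    start_app target --wait-for-rpc
    sleep 5                           # stand-in for waitforlisten
    $SPDK/scripts/rpc.py -s "${app_socket[target]}" rpc_get_methods > /dev/null
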
00:05:14.191 [2024-11-20 15:59:49.888295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044694 ] 00:05:14.452 [2024-11-20 15:59:50.177228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.452 [2024-11-20 15:59:50.203879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.024 15:59:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.024 15:59:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:15.024 15:59:50 json_config -- json_config/common.sh@26 -- # echo '' 00:05:15.024 00:05:15.024 15:59:50 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:15.024 15:59:50 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:15.024 15:59:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.024 15:59:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.024 15:59:50 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:15.024 15:59:50 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:15.024 15:59:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.024 15:59:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.024 15:59:50 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:15.024 15:59:50 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:15.024 15:59:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:15.594 15:59:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.594 15:59:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:15.594 15:59:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:15.594 15:59:51 json_config -- 
json_config/json_config.sh@54 -- # sort 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:15.594 15:59:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.594 15:59:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:15.594 15:59:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.594 15:59:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:15.594 15:59:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.594 15:59:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.854 MallocForNvmf0 00:05:15.854 15:59:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:15.855 15:59:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.115 MallocForNvmf1 00:05:16.115 15:59:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:16.115 15:59:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:16.115 [2024-11-20 15:59:52.029307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.376 15:59:52 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:16.376 15:59:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:16.376 15:59:52 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:16.376 15:59:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:16.637 15:59:52 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.637 15:59:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.898 15:59:52 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.898 15:59:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.898 [2024-11-20 15:59:52.747511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.898 15:59:52 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:16.898 15:59:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.898 15:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.898 15:59:52 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:16.898 15:59:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.898 15:59:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.159 15:59:52 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:17.159 15:59:52 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.159 15:59:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.159 MallocBdevForConfigChangeCheck 00:05:17.159 15:59:53 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:17.159 15:59:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.159 15:59:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.159 15:59:53 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:17.159 15:59:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.732 15:59:53 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:17.732 INFO: shutting down applications... 
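The notification-type check traced earlier (the @54 lines) relies on a compact shell set-difference: both lists are echoed into one stream, split into lines, sorted, and `uniq -u` keeps only lines that occur exactly once, i.e. the symmetric difference. A minimal sketch of the idiom, with the list contents taken from the trace above:

  enabled_types="bdev_register bdev_unregister fsdev_register fsdev_unregister"
  get_types="fsdev_register fsdev_unregister bdev_register bdev_unregister"
  # every type present in both lists appears twice and is dropped by uniq -u;
  # an empty result therefore means the two sets are identical
  type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)
  [ -z "$type_diff" ] && echo 'notification types match'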
00:05:17.732 15:59:53 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:17.732 15:59:53 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:17.732 15:59:53 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:17.732 15:59:53 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:17.993 Calling clear_iscsi_subsystem 00:05:17.993 Calling clear_nvmf_subsystem 00:05:17.993 Calling clear_nbd_subsystem 00:05:17.993 Calling clear_ublk_subsystem 00:05:17.993 Calling clear_vhost_blk_subsystem 00:05:17.993 Calling clear_vhost_scsi_subsystem 00:05:17.993 Calling clear_bdev_subsystem 00:05:17.993 15:59:53 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:17.993 15:59:53 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:17.993 15:59:53 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:17.993 15:59:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.993 15:59:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:17.993 15:59:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:18.255 15:59:54 json_config -- json_config/json_config.sh@352 -- # break 00:05:18.255 15:59:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:18.255 15:59:54 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:18.255 15:59:54 json_config -- json_config/common.sh@31 -- # local app=target 00:05:18.255 15:59:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.255 15:59:54 json_config -- json_config/common.sh@35 -- # [[ -n 1044694 ]] 00:05:18.255 15:59:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1044694 00:05:18.255 15:59:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.255 15:59:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.255 15:59:54 json_config -- json_config/common.sh@41 -- # kill -0 1044694 00:05:18.255 15:59:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.828 15:59:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.828 15:59:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.828 15:59:54 json_config -- json_config/common.sh@41 -- # kill -0 1044694 00:05:18.828 15:59:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.828 15:59:54 json_config -- json_config/common.sh@43 -- # break 00:05:18.828 15:59:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.828 15:59:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.828 SPDK target shutdown done 00:05:18.828 15:59:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:18.828 INFO: relaunching applications... 
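The shutdown sequence above sends SIGINT and then polls with `kill -0` (signal 0 only tests for existence, it delivers nothing) for up to 30 half-second intervals. A minimal sketch, assuming a variable holding the target's pid (1044694 in the trace above):

  pid=1044694                     # illustrative; taken from the trace above
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done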
00:05:18.828 15:59:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.828 15:59:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:18.828 15:59:54 json_config -- json_config/common.sh@10 -- # shift 00:05:18.828 15:59:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.828 15:59:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.828 15:59:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.828 15:59:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.828 15:59:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.828 15:59:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1045832 00:05:18.828 15:59:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.828 Waiting for target to run... 00:05:18.828 15:59:54 json_config -- json_config/common.sh@25 -- # waitforlisten 1045832 /var/tmp/spdk_tgt.sock 00:05:18.828 15:59:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.828 15:59:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 1045832 ']' 00:05:18.828 15:59:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.828 15:59:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.828 15:59:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.828 15:59:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.828 15:59:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.828 [2024-11-20 15:59:54.722433] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:18.828 [2024-11-20 15:59:54.722500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045832 ] 00:05:19.090 [2024-11-20 15:59:54.938880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.090 [2024-11-20 15:59:54.961933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.661 [2024-11-20 15:59:55.460646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.661 [2024-11-20 15:59:55.493017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.661 15:59:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.661 15:59:55 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:19.661 15:59:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:19.661 00:05:19.661 15:59:55 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:19.661 15:59:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:19.661 INFO: Checking if target configuration is the same... 
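The relaunch above replays the configuration captured earlier by save_config: spdk_tgt is started with --json so the new process rebuilds the same bdevs, TCP transport, and NVMe-oF subsystems before serving RPCs. A minimal sketch using the flags visible in the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json $SPDK/spdk_tgt_config.json &
  # the harness then blocks until the RPC socket answers before continuing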
00:05:19.661 15:59:55 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.661 15:59:55 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:19.661 15:59:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.661 + '[' 2 -ne 2 ']' 00:05:19.661 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.661 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:19.661 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.661 +++ basename /dev/fd/62 00:05:19.661 ++ mktemp /tmp/62.XXX 00:05:19.661 + tmp_file_1=/tmp/62.UYc 00:05:19.661 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.661 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.661 + tmp_file_2=/tmp/spdk_tgt_config.json.KMM 00:05:19.661 + ret=0 00:05:19.661 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.922 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.182 + diff -u /tmp/62.UYc /tmp/spdk_tgt_config.json.KMM 00:05:20.182 + echo 'INFO: JSON config files are the same' 00:05:20.182 INFO: JSON config files are the same 00:05:20.182 + rm /tmp/62.UYc /tmp/spdk_tgt_config.json.KMM 00:05:20.182 + exit 0 00:05:20.182 15:59:55 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:20.182 15:59:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:20.182 INFO: changing configuration and checking if this can be detected... 00:05:20.182 15:59:55 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.182 15:59:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.182 15:59:56 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.182 15:59:56 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:20.182 15:59:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.182 + '[' 2 -ne 2 ']' 00:05:20.182 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.182 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
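json_diff.sh, traced above, canonicalizes both inputs with config_filter.py -method sort (so JSON key and array ordering cannot cause false mismatches) and then lets a plain diff -u decide. A sketch of the flow; the exact redirections inside json_diff.sh are not echoed here, so treat the plumbing as an assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  tmp_file_1=$(mktemp /tmp/62.XXX)
  tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  # live configuration, normalized
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > "$tmp_file_1"
  # on-disk configuration, normalized the same way
  $SPDK/test/json_config/config_filter.py -method sort \
      < $SPDK/spdk_tgt_config.json > "$tmp_file_2"
  diff -u "$tmp_file_1" "$tmp_file_2" \
      && echo 'INFO: JSON config files are the same'
  rm "$tmp_file_1" "$tmp_file_2"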
00:05:20.182 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.182 +++ basename /dev/fd/62 00:05:20.182 ++ mktemp /tmp/62.XXX 00:05:20.182 + tmp_file_1=/tmp/62.RNo 00:05:20.182 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.182 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.182 + tmp_file_2=/tmp/spdk_tgt_config.json.5EK 00:05:20.182 + ret=0 00:05:20.182 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.753 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.754 + diff -u /tmp/62.RNo /tmp/spdk_tgt_config.json.5EK 00:05:20.754 + ret=1 00:05:20.754 + echo '=== Start of file: /tmp/62.RNo ===' 00:05:20.754 + cat /tmp/62.RNo 00:05:20.754 + echo '=== End of file: /tmp/62.RNo ===' 00:05:20.754 + echo '' 00:05:20.754 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5EK ===' 00:05:20.754 + cat /tmp/spdk_tgt_config.json.5EK 00:05:20.754 + echo '=== End of file: /tmp/spdk_tgt_config.json.5EK ===' 00:05:20.754 + echo '' 00:05:20.754 + rm /tmp/62.RNo /tmp/spdk_tgt_config.json.5EK 00:05:20.754 + exit 1 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:20.754 INFO: configuration change detected. 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@324 -- # [[ -n 1045832 ]] 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.754 15:59:56 json_config -- json_config/json_config.sh@330 -- # killprocess 1045832 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@954 -- # '[' -z 1045832 ']' 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@958 -- # kill -0 1045832 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@959 -- # uname 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.754 15:59:56 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045832 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045832' 00:05:20.754 killing process with pid 1045832 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@973 -- # kill 1045832 00:05:20.754 15:59:56 json_config -- common/autotest_common.sh@978 -- # wait 1045832 00:05:21.014 15:59:56 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.014 15:59:56 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:21.014 15:59:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.014 15:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.014 15:59:56 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:21.014 15:59:56 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:21.014 INFO: Success 00:05:21.014 00:05:21.014 real 0m7.293s 00:05:21.014 user 0m8.962s 00:05:21.014 sys 0m1.844s 00:05:21.014 15:59:56 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.014 15:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.014 ************************************ 00:05:21.014 END TEST json_config 00:05:21.014 ************************************ 00:05:21.014 15:59:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.014 15:59:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.014 15:59:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.014 15:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:21.276 ************************************ 00:05:21.276 START TEST json_config_extra_key 00:05:21.276 ************************************ 00:05:21.276 15:59:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.276 15:59:57 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.276 15:59:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.276 --rc genhtml_branch_coverage=1 00:05:21.276 --rc genhtml_function_coverage=1 00:05:21.276 --rc genhtml_legend=1 00:05:21.276 --rc geninfo_all_blocks=1 00:05:21.276 --rc geninfo_unexecuted_blocks=1 00:05:21.276 00:05:21.276 ' 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.276 --rc genhtml_branch_coverage=1 00:05:21.276 --rc genhtml_function_coverage=1 00:05:21.276 --rc genhtml_legend=1 00:05:21.276 --rc geninfo_all_blocks=1 00:05:21.276 --rc geninfo_unexecuted_blocks=1 00:05:21.276 00:05:21.276 ' 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.276 --rc genhtml_branch_coverage=1 00:05:21.276 --rc genhtml_function_coverage=1 00:05:21.276 --rc genhtml_legend=1 00:05:21.276 --rc geninfo_all_blocks=1 00:05:21.276 --rc geninfo_unexecuted_blocks=1 00:05:21.276 00:05:21.276 ' 00:05:21.276 15:59:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.276 --rc genhtml_branch_coverage=1 00:05:21.276 --rc genhtml_function_coverage=1 00:05:21.276 --rc genhtml_legend=1 00:05:21.276 --rc geninfo_all_blocks=1 00:05:21.277 --rc geninfo_unexecuted_blocks=1 00:05:21.277 00:05:21.277 ' 00:05:21.277 15:59:57 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.277 15:59:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.277 15:59:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.277 15:59:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.277 15:59:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.277 15:59:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.277 15:59:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.277 15:59:57 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.277 15:59:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:21.277 15:59:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.277 15:59:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:21.277 INFO: launching applications... 
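The json_config_extra_key harness above keeps per-application state in bash associative arrays, all keyed by a logical app name ('target' here), which lets the same start/stop helpers manage several apps. A minimal sketch mirroring the declarations in the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  declare -A app_pid app_socket app_params configs_path
  app_socket['target']='/var/tmp/spdk_tgt.sock'
  app_params['target']='-m 0x1 -s 1024'
  configs_path['target']=$SPDK/test/json_config/extra_key.json
  # after launching, the pid is recorded under the same key:
  #   app_pid['target']=$!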
00:05:21.277 15:59:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1046483 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.277 Waiting for target to run... 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1046483 /var/tmp/spdk_tgt.sock 00:05:21.277 15:59:57 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1046483 ']' 00:05:21.277 15:59:57 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.277 15:59:57 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.277 15:59:57 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.277 15:59:57 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.277 15:59:57 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.277 15:59:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.539 [2024-11-20 15:59:57.248696] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:21.539 [2024-11-20 15:59:57.248775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046483 ] 00:05:21.799 [2024-11-20 15:59:57.548778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.799 [2024-11-20 15:59:57.579167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.369 15:59:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.369 15:59:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:22.369 00:05:22.369 15:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:22.369 INFO: shutting down applications... 
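waitforlisten itself is not expanded in the trace; a stand-in that captures the idea (an assumption, not SPDK's actual implementation) would poll the RPC socket until the target answers, giving up early if the process dies:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
      local i
      for (( i = 0; i < 100; i++ )); do
          # spdk_get_version is one of the methods listed by rpc_get_methods
          $SPDK/scripts/rpc.py -s "$sock" -t 1 spdk_get_version \
              >/dev/null 2>&1 && return 0
          kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
          sleep 0.1
      done
      return 1
  }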
00:05:22.369 15:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1046483 ]] 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1046483 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1046483 00:05:22.369 15:59:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.630 15:59:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.630 15:59:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.630 15:59:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1046483 00:05:22.631 15:59:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.631 15:59:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:22.631 15:59:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.631 15:59:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.631 SPDK target shutdown done 00:05:22.631 15:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:22.631 Success 00:05:22.631 00:05:22.631 real 0m1.571s 00:05:22.631 user 0m1.159s 00:05:22.631 sys 0m0.428s 00:05:22.631 15:59:58 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.631 15:59:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.631 ************************************ 00:05:22.631 END TEST json_config_extra_key 00:05:22.631 ************************************ 00:05:22.892 15:59:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.892 15:59:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.892 15:59:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.892 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:22.892 ************************************ 00:05:22.892 START TEST alias_rpc 00:05:22.892 ************************************ 00:05:22.892 15:59:58 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.892 * Looking for test storage... 
00:05:22.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:22.892 15:59:58 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.892 15:59:58 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.892 15:59:58 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.892 15:59:58 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.892 15:59:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.153 15:59:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.153 --rc genhtml_branch_coverage=1 00:05:23.153 --rc genhtml_function_coverage=1 00:05:23.153 --rc genhtml_legend=1 00:05:23.153 --rc geninfo_all_blocks=1 00:05:23.153 --rc geninfo_unexecuted_blocks=1 00:05:23.153 00:05:23.153 ' 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.153 --rc genhtml_branch_coverage=1 00:05:23.153 --rc genhtml_function_coverage=1 00:05:23.153 --rc genhtml_legend=1 00:05:23.153 --rc geninfo_all_blocks=1 00:05:23.153 --rc geninfo_unexecuted_blocks=1 00:05:23.153 00:05:23.153 ' 00:05:23.153 15:59:58 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.153 --rc genhtml_branch_coverage=1 00:05:23.153 --rc genhtml_function_coverage=1 00:05:23.153 --rc genhtml_legend=1 00:05:23.153 --rc geninfo_all_blocks=1 00:05:23.153 --rc geninfo_unexecuted_blocks=1 00:05:23.153 00:05:23.153 ' 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.153 --rc genhtml_branch_coverage=1 00:05:23.153 --rc genhtml_function_coverage=1 00:05:23.153 --rc genhtml_legend=1 00:05:23.153 --rc geninfo_all_blocks=1 00:05:23.153 --rc geninfo_unexecuted_blocks=1 00:05:23.153 00:05:23.153 ' 00:05:23.153 15:59:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.153 15:59:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1046842 00:05:23.153 15:59:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1046842 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1046842 ']' 00:05:23.153 15:59:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.153 15:59:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.153 [2024-11-20 15:59:58.897489] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
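The lcov check traced above (scripts/common.sh) compares dotted versions by splitting on '.', '-' and ':' and walking the fields numerically. A condensed sketch of the idiom; the real cmp_versions also validates each field before comparing:

  lt() {   # usage: lt 1.15 2  -> success when $1 < $2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      # walk up to the longer of the two field lists, padding with zeros
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # versions are equal
  }
  lt 1.15 2 && echo 'lcov predates 2.x'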
00:05:23.153 [2024-11-20 15:59:58.897562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046842 ] 00:05:23.153 [2024-11-20 15:59:58.988515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.153 [2024-11-20 15:59:59.028441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.093 15:59:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.093 15:59:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1046842 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1046842 ']' 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1046842 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1046842 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1046842' 00:05:24.093 killing process with pid 1046842 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 1046842 00:05:24.093 15:59:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 1046842 00:05:24.353 00:05:24.353 real 0m1.544s 00:05:24.353 user 0m1.692s 00:05:24.353 sys 0m0.449s 00:05:24.353 16:00:00 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.353 16:00:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.353 ************************************ 00:05:24.353 END TEST alias_rpc 00:05:24.353 ************************************ 00:05:24.353 16:00:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:24.353 16:00:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.353 16:00:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.353 16:00:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.353 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:24.353 ************************************ 00:05:24.353 START TEST spdkcli_tcp 00:05:24.353 ************************************ 00:05:24.353 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.615 * Looking for test storage... 
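killprocess, traced above for the alias_rpc target, guards against recycled pids: it verifies the process is still alive and checks its comm name before signalling. A simplified sketch (the real helper has extra handling when the name is sudo; here that case is just refused):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                        # must still be running
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      [ "$process_name" = sudo ] && return 1            # don't kill sudo by pid
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }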
00:05:24.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.615 16:00:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.615 --rc genhtml_branch_coverage=1 00:05:24.615 --rc genhtml_function_coverage=1 00:05:24.615 --rc genhtml_legend=1 00:05:24.615 --rc geninfo_all_blocks=1 00:05:24.615 --rc geninfo_unexecuted_blocks=1 00:05:24.615 00:05:24.615 ' 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.615 --rc genhtml_branch_coverage=1 00:05:24.615 --rc genhtml_function_coverage=1 00:05:24.615 --rc genhtml_legend=1 00:05:24.615 --rc geninfo_all_blocks=1 00:05:24.615 --rc 
geninfo_unexecuted_blocks=1 00:05:24.615 00:05:24.615 ' 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.615 --rc genhtml_branch_coverage=1 00:05:24.615 --rc genhtml_function_coverage=1 00:05:24.615 --rc genhtml_legend=1 00:05:24.615 --rc geninfo_all_blocks=1 00:05:24.615 --rc geninfo_unexecuted_blocks=1 00:05:24.615 00:05:24.615 ' 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.615 --rc genhtml_branch_coverage=1 00:05:24.615 --rc genhtml_function_coverage=1 00:05:24.615 --rc genhtml_legend=1 00:05:24.615 --rc geninfo_all_blocks=1 00:05:24.615 --rc geninfo_unexecuted_blocks=1 00:05:24.615 00:05:24.615 ' 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1047210 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1047210 00:05:24.615 16:00:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1047210 ']' 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.615 16:00:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.616 [2024-11-20 16:00:00.519384] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:05:24.616 [2024-11-20 16:00:00.519459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047210 ] 00:05:24.876 [2024-11-20 16:00:00.607977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.876 [2024-11-20 16:00:00.652217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.876 [2024-11-20 16:00:00.652237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.446 16:00:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.446 16:00:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:25.446 16:00:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.446 16:00:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1047491 00:05:25.446 16:00:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.707 [ 00:05:25.707 "bdev_malloc_delete", 00:05:25.707 "bdev_malloc_create", 00:05:25.707 "bdev_null_resize", 00:05:25.707 "bdev_null_delete", 00:05:25.707 "bdev_null_create", 00:05:25.707 "bdev_nvme_cuse_unregister", 00:05:25.707 "bdev_nvme_cuse_register", 00:05:25.707 "bdev_opal_new_user", 00:05:25.707 "bdev_opal_set_lock_state", 00:05:25.707 "bdev_opal_delete", 00:05:25.707 "bdev_opal_get_info", 00:05:25.707 "bdev_opal_create", 00:05:25.707 "bdev_nvme_opal_revert", 00:05:25.707 "bdev_nvme_opal_init", 00:05:25.707 "bdev_nvme_send_cmd", 00:05:25.707 "bdev_nvme_set_keys", 00:05:25.707 "bdev_nvme_get_path_iostat", 00:05:25.707 "bdev_nvme_get_mdns_discovery_info", 00:05:25.707 "bdev_nvme_stop_mdns_discovery", 00:05:25.707 "bdev_nvme_start_mdns_discovery", 00:05:25.707 "bdev_nvme_set_multipath_policy", 00:05:25.707 "bdev_nvme_set_preferred_path", 00:05:25.707 "bdev_nvme_get_io_paths", 00:05:25.707 "bdev_nvme_remove_error_injection", 00:05:25.707 "bdev_nvme_add_error_injection", 00:05:25.707 "bdev_nvme_get_discovery_info", 00:05:25.707 "bdev_nvme_stop_discovery", 00:05:25.707 "bdev_nvme_start_discovery", 00:05:25.707 "bdev_nvme_get_controller_health_info", 00:05:25.707 "bdev_nvme_disable_controller", 00:05:25.707 "bdev_nvme_enable_controller", 00:05:25.707 "bdev_nvme_reset_controller", 00:05:25.707 "bdev_nvme_get_transport_statistics", 00:05:25.707 "bdev_nvme_apply_firmware", 00:05:25.707 "bdev_nvme_detach_controller", 00:05:25.707 "bdev_nvme_get_controllers", 00:05:25.707 "bdev_nvme_attach_controller", 00:05:25.707 "bdev_nvme_set_hotplug", 00:05:25.707 "bdev_nvme_set_options", 00:05:25.707 "bdev_passthru_delete", 00:05:25.707 "bdev_passthru_create", 00:05:25.707 "bdev_lvol_set_parent_bdev", 00:05:25.707 "bdev_lvol_set_parent", 00:05:25.707 "bdev_lvol_check_shallow_copy", 00:05:25.707 "bdev_lvol_start_shallow_copy", 00:05:25.707 "bdev_lvol_grow_lvstore", 00:05:25.707 "bdev_lvol_get_lvols", 00:05:25.707 "bdev_lvol_get_lvstores", 00:05:25.707 "bdev_lvol_delete", 00:05:25.707 "bdev_lvol_set_read_only", 00:05:25.707 "bdev_lvol_resize", 00:05:25.707 "bdev_lvol_decouple_parent", 00:05:25.707 "bdev_lvol_inflate", 00:05:25.707 "bdev_lvol_rename", 00:05:25.707 "bdev_lvol_clone_bdev", 00:05:25.707 "bdev_lvol_clone", 00:05:25.707 "bdev_lvol_snapshot", 00:05:25.707 "bdev_lvol_create", 00:05:25.707 "bdev_lvol_delete_lvstore", 00:05:25.707 "bdev_lvol_rename_lvstore", 
00:05:25.707 "bdev_lvol_create_lvstore", 00:05:25.707 "bdev_raid_set_options", 00:05:25.707 "bdev_raid_remove_base_bdev", 00:05:25.707 "bdev_raid_add_base_bdev", 00:05:25.707 "bdev_raid_delete", 00:05:25.707 "bdev_raid_create", 00:05:25.707 "bdev_raid_get_bdevs", 00:05:25.707 "bdev_error_inject_error", 00:05:25.707 "bdev_error_delete", 00:05:25.707 "bdev_error_create", 00:05:25.707 "bdev_split_delete", 00:05:25.707 "bdev_split_create", 00:05:25.707 "bdev_delay_delete", 00:05:25.707 "bdev_delay_create", 00:05:25.707 "bdev_delay_update_latency", 00:05:25.707 "bdev_zone_block_delete", 00:05:25.707 "bdev_zone_block_create", 00:05:25.707 "blobfs_create", 00:05:25.707 "blobfs_detect", 00:05:25.707 "blobfs_set_cache_size", 00:05:25.707 "bdev_aio_delete", 00:05:25.707 "bdev_aio_rescan", 00:05:25.707 "bdev_aio_create", 00:05:25.707 "bdev_ftl_set_property", 00:05:25.707 "bdev_ftl_get_properties", 00:05:25.707 "bdev_ftl_get_stats", 00:05:25.707 "bdev_ftl_unmap", 00:05:25.707 "bdev_ftl_unload", 00:05:25.707 "bdev_ftl_delete", 00:05:25.707 "bdev_ftl_load", 00:05:25.707 "bdev_ftl_create", 00:05:25.707 "bdev_virtio_attach_controller", 00:05:25.707 "bdev_virtio_scsi_get_devices", 00:05:25.707 "bdev_virtio_detach_controller", 00:05:25.707 "bdev_virtio_blk_set_hotplug", 00:05:25.707 "bdev_iscsi_delete", 00:05:25.707 "bdev_iscsi_create", 00:05:25.707 "bdev_iscsi_set_options", 00:05:25.707 "accel_error_inject_error", 00:05:25.707 "ioat_scan_accel_module", 00:05:25.707 "dsa_scan_accel_module", 00:05:25.707 "iaa_scan_accel_module", 00:05:25.707 "vfu_virtio_create_fs_endpoint", 00:05:25.707 "vfu_virtio_create_scsi_endpoint", 00:05:25.707 "vfu_virtio_scsi_remove_target", 00:05:25.707 "vfu_virtio_scsi_add_target", 00:05:25.707 "vfu_virtio_create_blk_endpoint", 00:05:25.707 "vfu_virtio_delete_endpoint", 00:05:25.707 "keyring_file_remove_key", 00:05:25.707 "keyring_file_add_key", 00:05:25.707 "keyring_linux_set_options", 00:05:25.707 "fsdev_aio_delete", 00:05:25.707 "fsdev_aio_create", 00:05:25.707 "iscsi_get_histogram", 00:05:25.707 "iscsi_enable_histogram", 00:05:25.707 "iscsi_set_options", 00:05:25.707 "iscsi_get_auth_groups", 00:05:25.707 "iscsi_auth_group_remove_secret", 00:05:25.707 "iscsi_auth_group_add_secret", 00:05:25.707 "iscsi_delete_auth_group", 00:05:25.707 "iscsi_create_auth_group", 00:05:25.707 "iscsi_set_discovery_auth", 00:05:25.707 "iscsi_get_options", 00:05:25.707 "iscsi_target_node_request_logout", 00:05:25.707 "iscsi_target_node_set_redirect", 00:05:25.707 "iscsi_target_node_set_auth", 00:05:25.707 "iscsi_target_node_add_lun", 00:05:25.707 "iscsi_get_stats", 00:05:25.707 "iscsi_get_connections", 00:05:25.707 "iscsi_portal_group_set_auth", 00:05:25.707 "iscsi_start_portal_group", 00:05:25.707 "iscsi_delete_portal_group", 00:05:25.707 "iscsi_create_portal_group", 00:05:25.707 "iscsi_get_portal_groups", 00:05:25.707 "iscsi_delete_target_node", 00:05:25.707 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.707 "iscsi_target_node_add_pg_ig_maps", 00:05:25.707 "iscsi_create_target_node", 00:05:25.707 "iscsi_get_target_nodes", 00:05:25.707 "iscsi_delete_initiator_group", 00:05:25.707 "iscsi_initiator_group_remove_initiators", 00:05:25.707 "iscsi_initiator_group_add_initiators", 00:05:25.707 "iscsi_create_initiator_group", 00:05:25.707 "iscsi_get_initiator_groups", 00:05:25.707 "nvmf_set_crdt", 00:05:25.707 "nvmf_set_config", 00:05:25.707 "nvmf_set_max_subsystems", 00:05:25.707 "nvmf_stop_mdns_prr", 00:05:25.707 "nvmf_publish_mdns_prr", 00:05:25.707 "nvmf_subsystem_get_listeners", 00:05:25.707 
"nvmf_subsystem_get_qpairs", 00:05:25.707 "nvmf_subsystem_get_controllers", 00:05:25.707 "nvmf_get_stats", 00:05:25.707 "nvmf_get_transports", 00:05:25.707 "nvmf_create_transport", 00:05:25.707 "nvmf_get_targets", 00:05:25.707 "nvmf_delete_target", 00:05:25.707 "nvmf_create_target", 00:05:25.707 "nvmf_subsystem_allow_any_host", 00:05:25.707 "nvmf_subsystem_set_keys", 00:05:25.707 "nvmf_subsystem_remove_host", 00:05:25.707 "nvmf_subsystem_add_host", 00:05:25.707 "nvmf_ns_remove_host", 00:05:25.707 "nvmf_ns_add_host", 00:05:25.707 "nvmf_subsystem_remove_ns", 00:05:25.707 "nvmf_subsystem_set_ns_ana_group", 00:05:25.707 "nvmf_subsystem_add_ns", 00:05:25.708 "nvmf_subsystem_listener_set_ana_state", 00:05:25.708 "nvmf_discovery_get_referrals", 00:05:25.708 "nvmf_discovery_remove_referral", 00:05:25.708 "nvmf_discovery_add_referral", 00:05:25.708 "nvmf_subsystem_remove_listener", 00:05:25.708 "nvmf_subsystem_add_listener", 00:05:25.708 "nvmf_delete_subsystem", 00:05:25.708 "nvmf_create_subsystem", 00:05:25.708 "nvmf_get_subsystems", 00:05:25.708 "env_dpdk_get_mem_stats", 00:05:25.708 "nbd_get_disks", 00:05:25.708 "nbd_stop_disk", 00:05:25.708 "nbd_start_disk", 00:05:25.708 "ublk_recover_disk", 00:05:25.708 "ublk_get_disks", 00:05:25.708 "ublk_stop_disk", 00:05:25.708 "ublk_start_disk", 00:05:25.708 "ublk_destroy_target", 00:05:25.708 "ublk_create_target", 00:05:25.708 "virtio_blk_create_transport", 00:05:25.708 "virtio_blk_get_transports", 00:05:25.708 "vhost_controller_set_coalescing", 00:05:25.708 "vhost_get_controllers", 00:05:25.708 "vhost_delete_controller", 00:05:25.708 "vhost_create_blk_controller", 00:05:25.708 "vhost_scsi_controller_remove_target", 00:05:25.708 "vhost_scsi_controller_add_target", 00:05:25.708 "vhost_start_scsi_controller", 00:05:25.708 "vhost_create_scsi_controller", 00:05:25.708 "thread_set_cpumask", 00:05:25.708 "scheduler_set_options", 00:05:25.708 "framework_get_governor", 00:05:25.708 "framework_get_scheduler", 00:05:25.708 "framework_set_scheduler", 00:05:25.708 "framework_get_reactors", 00:05:25.708 "thread_get_io_channels", 00:05:25.708 "thread_get_pollers", 00:05:25.708 "thread_get_stats", 00:05:25.708 "framework_monitor_context_switch", 00:05:25.708 "spdk_kill_instance", 00:05:25.708 "log_enable_timestamps", 00:05:25.708 "log_get_flags", 00:05:25.708 "log_clear_flag", 00:05:25.708 "log_set_flag", 00:05:25.708 "log_get_level", 00:05:25.708 "log_set_level", 00:05:25.708 "log_get_print_level", 00:05:25.708 "log_set_print_level", 00:05:25.708 "framework_enable_cpumask_locks", 00:05:25.708 "framework_disable_cpumask_locks", 00:05:25.708 "framework_wait_init", 00:05:25.708 "framework_start_init", 00:05:25.708 "scsi_get_devices", 00:05:25.708 "bdev_get_histogram", 00:05:25.708 "bdev_enable_histogram", 00:05:25.708 "bdev_set_qos_limit", 00:05:25.708 "bdev_set_qd_sampling_period", 00:05:25.708 "bdev_get_bdevs", 00:05:25.708 "bdev_reset_iostat", 00:05:25.708 "bdev_get_iostat", 00:05:25.708 "bdev_examine", 00:05:25.708 "bdev_wait_for_examine", 00:05:25.708 "bdev_set_options", 00:05:25.708 "accel_get_stats", 00:05:25.708 "accel_set_options", 00:05:25.708 "accel_set_driver", 00:05:25.708 "accel_crypto_key_destroy", 00:05:25.708 "accel_crypto_keys_get", 00:05:25.708 "accel_crypto_key_create", 00:05:25.708 "accel_assign_opc", 00:05:25.708 "accel_get_module_info", 00:05:25.708 "accel_get_opc_assignments", 00:05:25.708 "vmd_rescan", 00:05:25.708 "vmd_remove_device", 00:05:25.708 "vmd_enable", 00:05:25.708 "sock_get_default_impl", 00:05:25.708 "sock_set_default_impl", 
00:05:25.708 "sock_impl_set_options", 00:05:25.708 "sock_impl_get_options", 00:05:25.708 "iobuf_get_stats", 00:05:25.708 "iobuf_set_options", 00:05:25.708 "keyring_get_keys", 00:05:25.708 "vfu_tgt_set_base_path", 00:05:25.708 "framework_get_pci_devices", 00:05:25.708 "framework_get_config", 00:05:25.708 "framework_get_subsystems", 00:05:25.708 "fsdev_set_opts", 00:05:25.708 "fsdev_get_opts", 00:05:25.708 "trace_get_info", 00:05:25.708 "trace_get_tpoint_group_mask", 00:05:25.708 "trace_disable_tpoint_group", 00:05:25.708 "trace_enable_tpoint_group", 00:05:25.708 "trace_clear_tpoint_mask", 00:05:25.708 "trace_set_tpoint_mask", 00:05:25.708 "notify_get_notifications", 00:05:25.708 "notify_get_types", 00:05:25.708 "spdk_get_version", 00:05:25.708 "rpc_get_methods" 00:05:25.708 ] 00:05:25.708 16:00:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.708 16:00:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.708 16:00:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1047210 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1047210 ']' 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1047210 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047210 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047210' 00:05:25.708 killing process with pid 1047210 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1047210 00:05:25.708 16:00:01 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1047210 00:05:25.967 00:05:25.967 real 0m1.559s 00:05:25.967 user 0m2.864s 00:05:25.967 sys 0m0.461s 00:05:25.967 16:00:01 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.967 16:00:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.967 ************************************ 00:05:25.968 END TEST spdkcli_tcp 00:05:25.968 ************************************ 00:05:25.968 16:00:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.968 16:00:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.968 16:00:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.968 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:25.968 ************************************ 00:05:25.968 START TEST dpdk_mem_utility 00:05:25.968 ************************************ 00:05:25.968 16:00:01 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.228 * Looking for test storage... 
00:05:26.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.228 16:00:01 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.228 16:00:01 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.228 16:00:01 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.228 16:00:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.228 --rc genhtml_branch_coverage=1 00:05:26.228 --rc genhtml_function_coverage=1 00:05:26.228 --rc genhtml_legend=1 00:05:26.228 --rc geninfo_all_blocks=1 00:05:26.228 --rc geninfo_unexecuted_blocks=1 00:05:26.228 00:05:26.228 ' 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.228 --rc 
genhtml_branch_coverage=1 00:05:26.228 --rc genhtml_function_coverage=1 00:05:26.228 --rc genhtml_legend=1 00:05:26.228 --rc geninfo_all_blocks=1 00:05:26.228 --rc geninfo_unexecuted_blocks=1 00:05:26.228 00:05:26.228 ' 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.228 --rc genhtml_branch_coverage=1 00:05:26.228 --rc genhtml_function_coverage=1 00:05:26.228 --rc genhtml_legend=1 00:05:26.228 --rc geninfo_all_blocks=1 00:05:26.228 --rc geninfo_unexecuted_blocks=1 00:05:26.228 00:05:26.228 ' 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.228 --rc genhtml_branch_coverage=1 00:05:26.228 --rc genhtml_function_coverage=1 00:05:26.228 --rc genhtml_legend=1 00:05:26.228 --rc geninfo_all_blocks=1 00:05:26.228 --rc geninfo_unexecuted_blocks=1 00:05:26.228 00:05:26.228 ' 00:05:26.228 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.228 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1047662 00:05:26.228 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1047662 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1047662 ']' 00:05:26.228 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.228 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 [2024-11-20 16:00:02.143700] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
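(Note) The dpdk_mem_utility test starting here follows a three-step pattern, each step of which appears verbatim in the trace below: launch spdk_tgt, ask it to dump DPDK allocator state, then post-process the dump. A condensed sketch of the manual equivalent:

  ./build/bin/spdk_tgt &
  ./scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                  # heap / mempool / memzone summary
  ./scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0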
00:05:26.228 [2024-11-20 16:00:02.143780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047662 ] 00:05:26.489 [2024-11-20 16:00:02.232214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.489 [2024-11-20 16:00:02.267454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.062 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.062 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:27.062 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.062 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.062 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.062 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.063 { 00:05:27.063 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.063 } 00:05:27.063 16:00:02 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.063 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.063 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:27.063 1 heaps totaling size 810.000000 MiB 00:05:27.063 size: 810.000000 MiB heap id: 0 00:05:27.063 end heaps---------- 00:05:27.063 9 mempools totaling size 595.772034 MiB 00:05:27.063 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.063 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.063 size: 92.545471 MiB name: bdev_io_1047662 00:05:27.063 size: 50.003479 MiB name: msgpool_1047662 00:05:27.063 size: 36.509338 MiB name: fsdev_io_1047662 00:05:27.063 size: 21.763794 MiB name: PDU_Pool 00:05:27.063 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.063 size: 4.133484 MiB name: evtpool_1047662 00:05:27.063 size: 0.026123 MiB name: Session_Pool 00:05:27.063 end mempools------- 00:05:27.063 6 memzones totaling size 4.142822 MiB 00:05:27.063 size: 1.000366 MiB name: RG_ring_0_1047662 00:05:27.063 size: 1.000366 MiB name: RG_ring_1_1047662 00:05:27.063 size: 1.000366 MiB name: RG_ring_4_1047662 00:05:27.063 size: 1.000366 MiB name: RG_ring_5_1047662 00:05:27.063 size: 0.125366 MiB name: RG_ring_2_1047662 00:05:27.063 size: 0.015991 MiB name: RG_ring_3_1047662 00:05:27.063 end memzones------- 00:05:27.063 16:00:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.324 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:27.324 list of free elements. 
size: 10.862488 MiB 00:05:27.324 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:27.324 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:27.324 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:27.324 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:27.324 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:27.324 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:27.324 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:27.324 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:27.324 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:27.324 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:27.324 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:27.324 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:27.324 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:27.324 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:27.324 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:27.324 list of standard malloc elements. size: 199.218628 MiB 00:05:27.324 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:27.324 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:27.324 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:27.324 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:27.324 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.324 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.324 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:27.324 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.324 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:27.324 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:27.324 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:27.324 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:27.324 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:27.324 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:27.324 list of memzone associated elements. size: 599.918884 MiB 00:05:27.324 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:27.324 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.324 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:27.324 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.324 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:27.324 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1047662_0 00:05:27.324 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:27.324 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1047662_0 00:05:27.324 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:27.324 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1047662_0 00:05:27.324 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:27.324 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.324 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:27.324 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.324 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:27.324 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1047662_0 00:05:27.324 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:27.324 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1047662 00:05:27.324 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.324 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1047662 00:05:27.324 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:27.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.324 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:27.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.325 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:27.325 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.325 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:27.325 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.325 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:27.325 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1047662 00:05:27.325 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:27.325 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1047662 00:05:27.325 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:27.325 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1047662 00:05:27.325 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:27.325 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1047662 00:05:27.325 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:27.325 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1047662 00:05:27.325 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:27.325 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1047662 00:05:27.325 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:27.325 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.325 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:27.325 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.325 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:27.325 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.325 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:27.325 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1047662 00:05:27.325 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:27.325 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1047662 00:05:27.325 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:27.325 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.325 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:27.325 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.325 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:27.325 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1047662 00:05:27.325 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:27.325 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.325 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:27.325 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1047662 00:05:27.325 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:27.325 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1047662 00:05:27.325 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:27.325 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1047662 00:05:27.325 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:27.325 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.325 16:00:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.325 16:00:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1047662 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1047662 ']' 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1047662 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047662 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047662' 00:05:27.325 killing process with pid 1047662 00:05:27.325 16:00:03 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1047662 00:05:27.325 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1047662 00:05:27.585 00:05:27.585 real 0m1.407s 00:05:27.585 user 0m1.473s 00:05:27.585 sys 0m0.426s 00:05:27.585 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.585 16:00:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.585 ************************************ 00:05:27.585 END TEST dpdk_mem_utility 00:05:27.585 ************************************ 00:05:27.585 16:00:03 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:27.585 16:00:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.585 16:00:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.585 16:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:27.585 ************************************ 00:05:27.585 START TEST event 00:05:27.585 ************************************ 00:05:27.585 16:00:03 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:27.585 * Looking for test storage... 00:05:27.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:27.585 16:00:03 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.585 16:00:03 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.585 16:00:03 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.846 16:00:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.846 16:00:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.846 16:00:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.846 16:00:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.846 16:00:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.846 16:00:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.846 16:00:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.846 16:00:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.846 16:00:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.846 16:00:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.846 16:00:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.846 16:00:03 event -- scripts/common.sh@344 -- # case "$op" in 00:05:27.846 16:00:03 event -- scripts/common.sh@345 -- # : 1 00:05:27.846 16:00:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.846 16:00:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.846 16:00:03 event -- scripts/common.sh@365 -- # decimal 1 00:05:27.846 16:00:03 event -- scripts/common.sh@353 -- # local d=1 00:05:27.846 16:00:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.846 16:00:03 event -- scripts/common.sh@355 -- # echo 1 00:05:27.846 16:00:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.846 16:00:03 event -- scripts/common.sh@366 -- # decimal 2 00:05:27.846 16:00:03 event -- scripts/common.sh@353 -- # local d=2 00:05:27.846 16:00:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.846 16:00:03 event -- scripts/common.sh@355 -- # echo 2 00:05:27.846 16:00:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.846 16:00:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.846 16:00:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.846 16:00:03 event -- scripts/common.sh@368 -- # return 0 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.846 --rc genhtml_branch_coverage=1 00:05:27.846 --rc genhtml_function_coverage=1 00:05:27.846 --rc genhtml_legend=1 00:05:27.846 --rc geninfo_all_blocks=1 00:05:27.846 --rc geninfo_unexecuted_blocks=1 00:05:27.846 00:05:27.846 ' 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.846 --rc genhtml_branch_coverage=1 00:05:27.846 --rc genhtml_function_coverage=1 00:05:27.846 --rc genhtml_legend=1 00:05:27.846 --rc geninfo_all_blocks=1 00:05:27.846 --rc geninfo_unexecuted_blocks=1 00:05:27.846 00:05:27.846 ' 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:27.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.846 --rc genhtml_branch_coverage=1 00:05:27.846 --rc genhtml_function_coverage=1 00:05:27.846 --rc genhtml_legend=1 00:05:27.846 --rc geninfo_all_blocks=1 00:05:27.846 --rc geninfo_unexecuted_blocks=1 00:05:27.846 00:05:27.846 ' 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.846 --rc genhtml_branch_coverage=1 00:05:27.846 --rc genhtml_function_coverage=1 00:05:27.846 --rc genhtml_legend=1 00:05:27.846 --rc geninfo_all_blocks=1 00:05:27.846 --rc geninfo_unexecuted_blocks=1 00:05:27.846 00:05:27.846 ' 00:05:27.846 16:00:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:27.846 16:00:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:27.846 16:00:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:27.846 16:00:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.846 16:00:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.846 ************************************ 00:05:27.846 START TEST event_perf 00:05:27.846 ************************************ 00:05:27.846 16:00:03 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:27.846 Running I/O for 1 seconds...[2024-11-20 16:00:03.614781] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:27.846 [2024-11-20 16:00:03.614894] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048008 ] 00:05:27.846 [2024-11-20 16:00:03.708383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.846 [2024-11-20 16:00:03.751538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.846 [2024-11-20 16:00:03.751692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.846 [2024-11-20 16:00:03.751730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.846 Running I/O for 1 seconds...[2024-11-20 16:00:03.751731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.228 00:05:29.228 lcore 0: 173139 00:05:29.228 lcore 1: 173142 00:05:29.228 lcore 2: 173144 00:05:29.228 lcore 3: 173141 00:05:29.228 done. 00:05:29.228 00:05:29.228 real 0m1.187s 00:05:29.228 user 0m4.089s 00:05:29.228 sys 0m0.094s 00:05:29.228 16:00:04 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.228 16:00:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.228 ************************************ 00:05:29.228 END TEST event_perf 00:05:29.229 ************************************ 00:05:29.229 16:00:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.229 16:00:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:29.229 16:00:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.229 16:00:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.229 ************************************ 00:05:29.229 START TEST event_reactor 00:05:29.229 ************************************ 00:05:29.229 16:00:04 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.229 [2024-11-20 16:00:04.875605] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
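(Note) The event_perf run above used -m 0xF -t 1: four reactors for one second, with each "lcore N:" line reporting that core's event count. A back-of-envelope total for this run (the harness itself only prints the raw counters):

  173139 + 173142 + 173144 + 173141 = 692566 events in 1 s, roughly 173k events/s per core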
00:05:29.229 [2024-11-20 16:00:04.875703] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048368 ] 00:05:29.229 [2024-11-20 16:00:04.966208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.229 [2024-11-20 16:00:05.003684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.171 test_start 00:05:30.171 oneshot 00:05:30.171 tick 100 00:05:30.171 tick 100 00:05:30.171 tick 250 00:05:30.171 tick 100 00:05:30.171 tick 100 00:05:30.171 tick 100 00:05:30.171 tick 250 00:05:30.171 tick 500 00:05:30.171 tick 100 00:05:30.171 tick 100 00:05:30.171 tick 250 00:05:30.171 tick 100 00:05:30.171 tick 100 00:05:30.171 test_end 00:05:30.171 00:05:30.171 real 0m1.176s 00:05:30.171 user 0m1.089s 00:05:30.171 sys 0m0.083s 00:05:30.171 16:00:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.171 16:00:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:30.171 ************************************ 00:05:30.171 END TEST event_reactor 00:05:30.171 ************************************ 00:05:30.171 16:00:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.171 16:00:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:30.171 16:00:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.171 16:00:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.432 ************************************ 00:05:30.432 START TEST event_reactor_perf 00:05:30.432 ************************************ 00:05:30.432 16:00:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.432 [2024-11-20 16:00:06.130654] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
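(Note) Reading the event_reactor trace above: "oneshot" is a single event fired at test_start, and each "tick N" line is a poller callback tagged with its period, so the interleaved 100/250/500 lines are three pollers of different periods being driven by one reactor over the one-second run. This reading is inferred from the output alone; the exact tick units are defined inside test/event/reactor.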
00:05:30.432 [2024-11-20 16:00:06.130761] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048716 ] 00:05:30.432 [2024-11-20 16:00:06.217376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.432 [2024-11-20 16:00:06.248403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.373 test_start 00:05:31.373 test_end 00:05:31.373 Performance: 540909 events per second 00:05:31.373 00:05:31.373 real 0m1.164s 00:05:31.373 user 0m1.083s 00:05:31.373 sys 0m0.078s 00:05:31.373 16:00:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.373 16:00:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.373 ************************************ 00:05:31.373 END TEST event_reactor_perf 00:05:31.373 ************************************ 00:05:31.634 16:00:07 event -- event/event.sh@49 -- # uname -s 00:05:31.634 16:00:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:31.634 16:00:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:31.634 16:00:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.634 16:00:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.634 16:00:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.634 ************************************ 00:05:31.634 START TEST event_scheduler 00:05:31.634 ************************************ 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:31.634 * Looking for test storage... 
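(Note) The event_reactor_perf result above, 540909 events per second on a single core, works out to about 1.85 microseconds per event round trip (1 / 540909 s). That is a useful rough baseline for reactor event overhead on this machine, though it is a one-second sample, not a calibrated benchmark.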
00:05:31.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.634 16:00:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.634 --rc genhtml_branch_coverage=1 00:05:31.634 --rc genhtml_function_coverage=1 00:05:31.634 --rc genhtml_legend=1 00:05:31.634 --rc geninfo_all_blocks=1 00:05:31.634 --rc geninfo_unexecuted_blocks=1 00:05:31.634 00:05:31.634 ' 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.634 --rc genhtml_branch_coverage=1 00:05:31.634 --rc genhtml_function_coverage=1 00:05:31.634 --rc genhtml_legend=1 00:05:31.634 --rc geninfo_all_blocks=1 00:05:31.634 --rc geninfo_unexecuted_blocks=1 00:05:31.634 00:05:31.634 ' 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.634 --rc genhtml_branch_coverage=1 00:05:31.634 --rc genhtml_function_coverage=1 00:05:31.634 --rc genhtml_legend=1 00:05:31.634 --rc geninfo_all_blocks=1 00:05:31.634 --rc geninfo_unexecuted_blocks=1 00:05:31.634 00:05:31.634 ' 00:05:31.634 16:00:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.634 --rc genhtml_branch_coverage=1 00:05:31.634 --rc genhtml_function_coverage=1 00:05:31.634 --rc genhtml_legend=1 00:05:31.634 --rc geninfo_all_blocks=1 00:05:31.634 --rc geninfo_unexecuted_blocks=1 00:05:31.634 00:05:31.634 ' 00:05:31.635 16:00:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.635 16:00:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1049012 00:05:31.635 16:00:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.635 16:00:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1049012 00:05:31.635 16:00:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:05:31.635 16:00:07 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1049012 ']' 00:05:31.635 16:00:07 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.635 16:00:07 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.635 16:00:07 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.635 16:00:07 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.635 16:00:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.895 [2024-11-20 16:00:07.613759] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:31.895 [2024-11-20 16:00:07.613831] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049012 ] 00:05:31.895 [2024-11-20 16:00:07.708735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.895 [2024-11-20 16:00:07.764292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.895 [2024-11-20 16:00:07.764586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.895 [2024-11-20 16:00:07.764750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.895 [2024-11-20 16:00:07.764750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:32.837 16:00:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 [2024-11-20 16:00:08.439215] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:32.837 [2024-11-20 16:00:08.439234] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.837 [2024-11-20 16:00:08.439244] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.837 [2024-11-20 16:00:08.439251] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.837 [2024-11-20 16:00:08.439256] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 [2024-11-20 16:00:08.506189] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
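(Note) The scheduler app above was started with -m 0xF -p 0x2 --wait-for-rpc: a four-core mask (reactors on cores 0-3, confirmed by the four reactor_run notices), main lcore forced to core 2 (EAL's --main-lcore=2 in the parameter line), and initialization parked until an RPC arrives. The same pattern against a stock target would look like the sketch below (spdk_tgt substituted for the dedicated test binary, which is an assumption; framework_set_scheduler and framework_start_init are both in the method table dumped earlier):

  ./build/bin/spdk_tgt -m 0xF -p 2 --wait-for-rpc &
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init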
00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 ************************************ 00:05:32.837 START TEST scheduler_create_thread 00:05:32.837 ************************************ 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 2 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 3 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 4 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 5 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 6 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- 
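(Note) In the block above, framework_set_scheduler dynamic succeeds despite the dpdk_governor ERROR: the core mask covers only some SMT siblings, so the DPDK power governor is skipped and the dynamic scheduler runs without it, falling back to its defaults of load limit 20, core limit 80 and core busy 95. Those thresholds are tunable at runtime; the method table earlier lists scheduler_set_options, and its flags are best inspected per build rather than guessed:

  ./scripts/rpc.py scheduler_set_options -h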
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 7 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.837 8 00:05:32.837 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.838 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:32.838 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.838 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.838 9 00:05:32.838 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.838 16:00:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:32.838 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.838 16:00:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.410 10 00:05:33.410 16:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.410 16:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.410 16:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.410 16:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.793 16:00:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.793 16:00:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:34.793 16:00:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:34.793 16:00:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.793 16:00:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.363 16:00:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.363 16:00:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.363 16:00:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.363 16:00:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.303 16:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.303 16:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:36.303 16:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:36.303 16:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.303 16:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.875 16:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.875 00:05:36.875 real 0m4.226s 00:05:36.875 user 0m0.026s 00:05:36.875 sys 0m0.006s 00:05:36.875 16:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.875 16:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.875 ************************************ 00:05:36.875 END TEST scheduler_create_thread 00:05:36.875 ************************************ 00:05:36.875 16:00:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.875 16:00:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1049012 00:05:36.875 16:00:12 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1049012 ']' 00:05:36.875 16:00:12 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1049012 00:05:36.875 16:00:12 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:37.192 16:00:12 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.192 16:00:12 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1049012 00:05:37.192 16:00:12 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:37.192 16:00:12 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:37.192 16:00:12 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1049012' 00:05:37.192 killing process with pid 1049012 00:05:37.192 16:00:12 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1049012 00:05:37.192 16:00:12 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1049012 00:05:37.192 [2024-11-20 16:00:13.043892] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:37.483
00:05:37.483 real 0m5.841s
00:05:37.483 user 0m12.905s
00:05:37.483 sys 0m0.422s
00:05:37.483 16:00:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.483 16:00:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:37.483 ************************************
00:05:37.483 END TEST event_scheduler
00:05:37.483 ************************************
00:05:37.483 16:00:13 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:37.483 16:00:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:37.483 16:00:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:37.483 16:00:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.483 16:00:13 event -- common/autotest_common.sh@10 -- # set +x
00:05:37.483 ************************************
00:05:37.483 START TEST app_repeat
00:05:37.483 ************************************
00:05:37.483 16:00:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1050639
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1050639'
00:05:37.483 Process app_repeat pid: 1050639
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:37.483 spdk_app_start Round 0
00:05:37.483 16:00:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1050639 /var/tmp/spdk-nbd.sock
00:05:37.483 16:00:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1050639 ']'
00:05:37.483 16:00:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:37.483 16:00:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.483 16:00:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:37.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:37.483 16:00:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.483 16:00:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:37.483 [2024-11-20 16:00:13.322663] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization...
00:05:37.483 [2024-11-20 16:00:13.322764] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050639 ]
00:05:37.780 [2024-11-20 16:00:13.414880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:37.780 [2024-11-20 16:00:13.446499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.780 [2024-11-20 16:00:13.446500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.780 16:00:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.780 16:00:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:37.780 16:00:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:37.780 Malloc0
00:05:38.040 16:00:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:38.040 Malloc1
00:05:38.040 16:00:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:38.040 16:00:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:38.302 /dev/nbd0
00:05:38.302 16:00:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:38.302 16:00:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:38.302 1+0 records in
00:05:38.302 1+0 records out
00:05:38.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273094 s, 15.0 MB/s
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:38.302 16:00:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:38.302 16:00:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:38.302 16:00:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:38.302 16:00:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:38.563 /dev/nbd1
00:05:38.563 16:00:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:38.563 16:00:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:38.563 16:00:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:38.563 16:00:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:38.563 16:00:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:38.563 16:00:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:38.564 1+0 records in
00:05:38.564 1+0 records out
00:05:38.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285719 s, 14.3 MB/s
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:38.564 16:00:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:38.564 16:00:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:38.564 16:00:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
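Each nbd_start_disk above is followed by the waitfornbd helper from common/autotest_common.sh, whose xtrace shows in the @875-@893 loop indices: it polls /proc/partitions until the kernel exposes the device node, then proves the device actually services I/O by reading one 4 KiB block with O_DIRECT. A condensed sketch of that logic, keeping the 20-iteration bound from the trace; the retry sleep and failure branches are assumptions, since the successful first pass seen here never exercises them, and /tmp/nbdtest stands in for the workspace path:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device node visible yet?
            sleep 0.1                                          # assumed back-off between polls
        done
        for ((i = 1; i <= 20; i++)); do
            # read one block back, bypassing the page cache
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0                       # got real data back
        done
        return 1
    }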
00:05:38.564 16:00:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:38.564 16:00:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:38.564 16:00:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:38.825 {
00:05:38.825 "nbd_device": "/dev/nbd0",
00:05:38.825 "bdev_name": "Malloc0"
00:05:38.825 },
00:05:38.825 {
00:05:38.825 "nbd_device": "/dev/nbd1",
00:05:38.825 "bdev_name": "Malloc1"
00:05:38.825 }
00:05:38.825 ]'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:38.825 {
00:05:38.825 "nbd_device": "/dev/nbd0",
00:05:38.825 "bdev_name": "Malloc0"
00:05:38.825 },
00:05:38.825 {
00:05:38.825 "nbd_device": "/dev/nbd1",
00:05:38.825 "bdev_name": "Malloc1"
00:05:38.825 }
00:05:38.825 ]'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:38.825 /dev/nbd1'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:38.825 /dev/nbd1'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:38.825 256+0 records in
00:05:38.825 256+0 records out
00:05:38.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127591 s, 82.2 MB/s
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:38.825 256+0 records in
00:05:38.825 256+0 records out
00:05:38.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119902 s, 87.5 MB/s
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:38.825 256+0 records in
00:05:38.825 256+0 records out
00:05:38.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132033 s, 79.4 MB/s
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:38.825 16:00:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:39.086 16:00:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:39.346 16:00:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:39.606 16:00:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:39.606 16:00:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:39.606 16:00:15 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:39.867 [2024-11-20 16:00:15.576834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:39.867 [2024-11-20 16:00:15.604841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.867 [2024-11-20 16:00:15.604841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:39.867 [2024-11-20 16:00:15.634136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:39.867 [2024-11-20 16:00:15.634171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:43.166 16:00:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:43.166 16:00:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:43.166 spdk_app_start Round 1
00:05:43.166 16:00:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1050639 /var/tmp/spdk-nbd.sock
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1050639 ']'
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:43.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
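Round 0's data pass above is nbd_dd_data_verify run first in write mode and then in verify mode: 1 MiB of /dev/urandom is staged in a scratch file, copied onto each NBD device with O_DIRECT writes, and then cmp reads every device back against the same file, so corruption anywhere in the malloc bdev or the NBD path fails the round. The core of both modes, with the scratch path shortened from the workspace path in the trace:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest                           # trace uses test/event/nbdrandtest

    # write mode: stage random data and push it to every device
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256   # 256 x 4 KiB = 1 MiB
    for dev in "${nbd_list[@]}"; do
        dd if=$tmp_file of=$dev bs=4096 count=256 oflag=direct
    done
    # verify mode: byte-compare the first 1M of each device with the source
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M $tmp_file $dev
    done
    rm $tmp_file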
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:43.166 16:00:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:43.166 16:00:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:43.166 Malloc0
00:05:43.166 16:00:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:43.166 Malloc1
00:05:43.166 16:00:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:43.166 16:00:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:43.426 /dev/nbd0
00:05:43.426 16:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:43.426 16:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:43.426 1+0 records in
00:05:43.426 1+0 records out
00:05:43.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274952 s, 14.9 MB/s
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:43.426 16:00:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:43.426 16:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:43.426 16:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:43.426 16:00:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:43.687 /dev/nbd1
00:05:43.687 16:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:43.687 16:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:43.687 1+0 records in
00:05:43.687 1+0 records out
00:05:43.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286517 s, 14.3 MB/s
00:05:43.687 16:00:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:43.688 16:00:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:43.688 16:00:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:43.688 16:00:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:43.688 16:00:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:43.688 16:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:43.688 16:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:43.688 16:00:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:43.688 16:00:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.688 16:00:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:43.950 {
00:05:43.950 "nbd_device": "/dev/nbd0",
00:05:43.950 "bdev_name": "Malloc0"
00:05:43.950 },
00:05:43.950 {
00:05:43.950 "nbd_device": "/dev/nbd1",
00:05:43.950 "bdev_name": "Malloc1"
00:05:43.950 }
00:05:43.950 ]'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:43.950 {
00:05:43.950 "nbd_device": "/dev/nbd0",
00:05:43.950 "bdev_name": "Malloc0"
00:05:43.950 },
00:05:43.950 {
00:05:43.950 "nbd_device": "/dev/nbd1",
00:05:43.950 "bdev_name": "Malloc1"
00:05:43.950 }
00:05:43.950 ]'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:43.950 /dev/nbd1'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:43.950 /dev/nbd1'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:43.950 256+0 records in
00:05:43.950 256+0 records out
00:05:43.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117703 s, 89.1 MB/s
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:43.950 256+0 records in
00:05:43.950 256+0 records out
00:05:43.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123239 s, 85.1 MB/s
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:43.950 256+0 records in
00:05:43.950 256+0 records out
00:05:43.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130812 s, 80.2 MB/s
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:43.950 16:00:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:44.211 16:00:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.472 16:00:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:44.732 16:00:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:44.732 16:00:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:44.993 16:00:20 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:44.993 [2024-11-20 16:00:20.748734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:44.993 [2024-11-20 16:00:20.777944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.993 [2024-11-20 16:00:20.777945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.993 [2024-11-20 16:00:20.807644] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:44.993 [2024-11-20 16:00:20.807674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:48.293 16:00:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:48.293 16:00:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:48.293 spdk_app_start Round 2
00:05:48.293 16:00:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1050639 /var/tmp/spdk-nbd.sock
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1050639 ']'
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:48.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
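After each teardown above, nbd_get_count decides whether anything is still attached: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq extracts the device nodes, and grep -c counts them, with a bare true swallowing grep's non-zero exit once the list is empty, exactly as the '[]' branch in the trace shows. In outline, with the rpc.py path shortened:

    nbd_disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd) || true   # zero matches exits non-zero
    if [ "$count" -ne 0 ]; then
        return 1   # devices still attached; the trace expects 0 here
    fi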
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:48.293 16:00:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:48.293 16:00:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:48.293 Malloc0
00:05:48.293 16:00:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:48.293 Malloc1
00:05:48.293 16:00:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:48.293 16:00:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:48.555 /dev/nbd0
00:05:48.555 16:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:48.555 16:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:48.555 1+0 records in
00:05:48.555 1+0 records out
00:05:48.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270194 s, 15.2 MB/s
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:48.555 16:00:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:48.555 16:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:48.555 16:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:48.555 16:00:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:48.817 /dev/nbd1
00:05:48.817 16:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:48.817 16:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:48.817 1+0 records in
00:05:48.817 1+0 records out
00:05:48.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314948 s, 13.0 MB/s
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:48.817 16:00:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:48.817 16:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:48.817 16:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:48.817 16:00:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:48.817 16:00:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.817 16:00:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:49.077 16:00:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:49.077 {
00:05:49.078 "nbd_device": "/dev/nbd0",
00:05:49.078 "bdev_name": "Malloc0"
00:05:49.078 },
00:05:49.078 {
00:05:49.078 "nbd_device": "/dev/nbd1",
00:05:49.078 "bdev_name": "Malloc1"
00:05:49.078 }
00:05:49.078 ]'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:49.078 {
00:05:49.078 "nbd_device": "/dev/nbd0",
00:05:49.078 "bdev_name": "Malloc0"
00:05:49.078 },
00:05:49.078 {
00:05:49.078 "nbd_device": "/dev/nbd1",
00:05:49.078 "bdev_name": "Malloc1"
00:05:49.078 }
00:05:49.078 ]'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:49.078 /dev/nbd1'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:49.078 /dev/nbd1'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:49.078 256+0 records in
00:05:49.078 256+0 records out
00:05:49.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127305 s, 82.4 MB/s
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:49.078 256+0 records in
00:05:49.078 256+0 records out
00:05:49.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122467 s, 85.6 MB/s
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:49.078 256+0 records in
00:05:49.078 256+0 records out
00:05:49.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132166 s, 79.3 MB/s
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.078 16:00:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:49.337 16:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.338 16:00:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:49.597 16:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:49.597 16:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:49.597 16:00:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:49.597 16:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:49.597 16:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:49.598 16:00:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:49.598 16:00:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:49.598 16:00:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:49.598 16:00:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:49.598 16:00:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.598 16:00:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:49.857 16:00:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:49.857 16:00:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:49.857 16:00:25 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:50.117 [2024-11-20 16:00:25.857821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:50.117 [2024-11-20 16:00:25.886440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:50.117 [2024-11-20 16:00:25.886441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.117 [2024-11-20 16:00:25.915664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:50.117 [2024-11-20 16:00:25.915695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:53.416 16:00:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1050639 /var/tmp/spdk-nbd.sock
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1050639 ']'
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:53.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:53.416 16:00:28 event.app_repeat -- event/event.sh@39 -- # killprocess 1050639
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1050639 ']'
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1050639
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:53.416 16:00:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050639
00:05:53.416 16:00:29 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:53.416 16:00:29 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:53.416 16:00:29 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050639'
00:05:53.416 killing process with pid 1050639
00:05:53.416 16:00:29 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1050639
00:05:53.416 16:00:29 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1050639
00:05:53.416 spdk_app_start is called in Round 0.
00:05:53.416 Shutdown signal received, stop current app iteration
00:05:53.416 Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 reinitialization...
00:05:53.416 spdk_app_start is called in Round 1.
00:05:53.416 Shutdown signal received, stop current app iteration
00:05:53.416 Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 reinitialization...
00:05:53.416 spdk_app_start is called in Round 2.
00:05:53.416 Shutdown signal received, stop current app iteration
00:05:53.416 Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 reinitialization...
00:05:53.416 spdk_app_start is called in Round 3.
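Both apps in this section are torn down through the same killprocess helper from common/autotest_common.sh, whose trace appears above for pids 1049012 and 1050639: validate the pid, probe it with the null signal, check the command name so the helper never signals a bare sudo, then kill it and reap it with wait so the exit status is collected. Reduced to the steps visible in the trace; the branches not exercised here are assumptions:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1          # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0         # null signal: probe without killing (assumed early-out)
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # refuse to signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap and propagate the exit status
    }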
00:05:53.416 Shutdown signal received, stop current app iteration 00:05:53.416 16:00:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.416 16:00:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.416 00:05:53.416 real 0m15.836s 00:05:53.416 user 0m34.813s 00:05:53.416 sys 0m2.266s 00:05:53.416 16:00:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.416 16:00:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.416 ************************************ 00:05:53.416 END TEST app_repeat 00:05:53.416 ************************************ 00:05:53.416 16:00:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.416 16:00:29 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.416 16:00:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.416 16:00:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.416 16:00:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.416 ************************************ 00:05:53.416 START TEST cpu_locks 00:05:53.416 ************************************ 00:05:53.416 16:00:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.416 * Looking for test storage... 00:05:53.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:53.416 16:00:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.416 16:00:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.416 16:00:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.678 16:00:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.678 --rc genhtml_branch_coverage=1 00:05:53.678 --rc genhtml_function_coverage=1 00:05:53.678 --rc genhtml_legend=1 00:05:53.678 --rc geninfo_all_blocks=1 00:05:53.678 --rc geninfo_unexecuted_blocks=1 00:05:53.678 00:05:53.678 ' 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.678 --rc genhtml_branch_coverage=1 00:05:53.678 --rc genhtml_function_coverage=1 00:05:53.678 --rc genhtml_legend=1 00:05:53.678 --rc geninfo_all_blocks=1 00:05:53.678 --rc geninfo_unexecuted_blocks=1 00:05:53.678 00:05:53.678 ' 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.678 --rc genhtml_branch_coverage=1 00:05:53.678 --rc genhtml_function_coverage=1 00:05:53.678 --rc genhtml_legend=1 00:05:53.678 --rc geninfo_all_blocks=1 00:05:53.678 --rc geninfo_unexecuted_blocks=1 00:05:53.678 00:05:53.678 ' 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.678 --rc genhtml_branch_coverage=1 00:05:53.678 --rc genhtml_function_coverage=1 00:05:53.678 --rc genhtml_legend=1 00:05:53.678 --rc geninfo_all_blocks=1 00:05:53.678 --rc geninfo_unexecuted_blocks=1 00:05:53.678 00:05:53.678 ' 00:05:53.678 16:00:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.678 16:00:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.678 16:00:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.678 16:00:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.678 16:00:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.678 ************************************ 
00:05:53.678 START TEST default_locks 00:05:53.678 ************************************ 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1054057 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1054057 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1054057 ']' 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.678 16:00:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.678 [2024-11-20 16:00:29.500704] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:53.678 [2024-11-20 16:00:29.500774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054057 ] 00:05:53.678 [2024-11-20 16:00:29.589197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.940 [2024-11-20 16:00:29.623649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.512 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.512 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:54.512 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1054057 00:05:54.512 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1054057 00:05:54.512 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.081 lslocks: write error 00:05:55.081 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1054057 00:05:55.081 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1054057 ']' 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1054057' 00:05:55.082 killing process with pid 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1054057 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1054057 ']' 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
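locks_exist, traced above, is how each cpu_locks test checks whether a target still holds its per-core lock files; a sketch of the pattern (illustrative, not the cpu_locks.sh source):

    locks_exist() {
        local pid=$1
        # The target holds a lock on /var/tmp/spdk_cpu_lock_NNN for each core it
        # owns; lslocks lists the locks held by the pid, grep -q looks for them.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 1054057   # pid from the trace above

The stray "lslocks: write error" lines in the log are harmless: grep -q exits as soon as it matches, so lslocks reports the broken pipe on its next write.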
00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1054057) - No such process 00:05:55.082 ERROR: process (pid: 1054057) is no longer running 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.082 00:05:55.082 real 0m1.554s 00:05:55.082 user 0m1.657s 00:05:55.082 sys 0m0.553s 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.082 16:00:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.082 ************************************ 00:05:55.082 END TEST default_locks 00:05:55.082 ************************************ 00:05:55.343 16:00:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.343 16:00:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.343 16:00:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.343 16:00:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.343 ************************************ 00:05:55.343 START TEST default_locks_via_rpc 00:05:55.343 ************************************ 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1054387 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1054387 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1054387 ']' 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
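After killing the target, default_locks asserts that waitforlisten now fails; the NOT prefix in the trace inverts an exit status so that an expected failure passes the test. A sketch of the semantics only (the real helper in autotest_common.sh also validates its argument, which is what the valid_exec_arg lines above are doing):

    NOT() {
        # Succeed only when the wrapped command fails.
        if "$@"; then return 1; else return 0; fi
    }
    NOT waitforlisten 1054057   # the pid is gone, so waitforlisten must fail

This is why the "kill: (1054057) - No such process" and "ERROR: process ... is no longer running" lines above are followed by es=1 and a passing test.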
00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.343 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.343 [2024-11-20 16:00:31.131008] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:55.343 [2024-11-20 16:00:31.131071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054387 ] 00:05:55.343 [2024-11-20 16:00:31.219997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.343 [2024-11-20 16:00:31.263969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1054387 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1054387 00:05:56.285 16:00:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1054387 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1054387 ']' 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1054387 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054387 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.546 
16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1054387' 00:05:56.546 killing process with pid 1054387 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1054387 00:05:56.546 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1054387 00:05:56.807 00:05:56.807 real 0m1.430s 00:05:56.807 user 0m1.544s 00:05:56.807 sys 0m0.502s 00:05:56.807 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.807 16:00:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.807 ************************************ 00:05:56.807 END TEST default_locks_via_rpc 00:05:56.807 ************************************ 00:05:56.807 16:00:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.807 16:00:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.807 16:00:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.808 16:00:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.808 ************************************ 00:05:56.808 START TEST non_locking_app_on_locked_coremask 00:05:56.808 ************************************ 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1054666 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1054666 /var/tmp/spdk.sock 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1054666 ']' 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.808 16:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.808 [2024-11-20 16:00:32.632751] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
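default_locks_via_rpc, which finished above, releases and re-takes the core locks at runtime instead of at startup. The RPC sequence, condensed from the trace (rpc_cmd is the harness wrapper around scripts/rpc.py; no_locks and locks_exist are the helpers traced earlier):

    rpc_cmd framework_disable_cpumask_locks   # target drops its /var/tmp/spdk_cpu_lock_* locks
    no_locks                                  # harness asserts no lock files are held
    rpc_cmd framework_enable_cpumask_locks    # target re-acquires the locks
    locks_exist "$spdk_tgt_pid"               # and the lslocks check passes again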
00:05:56.808 [2024-11-20 16:00:32.632808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054666 ] 00:05:56.808 [2024-11-20 16:00:32.719838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.069 [2024-11-20 16:00:32.753280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1054972 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1054972 /var/tmp/spdk2.sock 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1054972 ']' 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.639 16:00:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.640 [2024-11-20 16:00:33.473186] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:57.640 [2024-11-20 16:00:33.473240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054972 ] 00:05:57.640 [2024-11-20 16:00:33.560675] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.640 [2024-11-20 16:00:33.560699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.899 [2024-11-20 16:00:33.619137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.470 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.470 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.470 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1054666 00:05:58.470 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1054666 00:05:58.470 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.041 lslocks: write error 00:05:59.041 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1054666 00:05:59.041 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1054666 ']' 00:05:59.041 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1054666 00:05:59.041 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.041 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.041 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054666 00:05:59.301 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.301 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.301 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1054666' 00:05:59.301 killing process with pid 1054666 00:05:59.301 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1054666 00:05:59.301 16:00:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1054666 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1054972 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1054972 ']' 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1054972 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054972 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1054972' 00:05:59.562 
killing process with pid 1054972 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1054972 00:05:59.562 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1054972 00:05:59.824 00:05:59.824 real 0m3.033s 00:05:59.824 user 0m3.368s 00:05:59.824 sys 0m0.921s 00:05:59.824 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.824 16:00:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 ************************************ 00:05:59.824 END TEST non_locking_app_on_locked_coremask 00:05:59.824 ************************************ 00:05:59.824 16:00:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:59.824 16:00:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.824 16:00:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.824 16:00:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 ************************************ 00:05:59.824 START TEST locking_app_on_unlocked_coremask 00:05:59.824 ************************************ 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1055354 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1055354 /var/tmp/spdk.sock 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1055354 ']' 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.824 16:00:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 [2024-11-20 16:00:35.741865] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:05:59.824 [2024-11-20 16:00:35.741918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055354 ] 00:06:00.085 [2024-11-20 16:00:35.824956] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
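locking_app_on_unlocked_coremask starts its first target with --disable-cpumask-locks, which is what the "CPU core locks deactivated." notice above reports: the target runs on core 0 without claiming the lock file, so the second target started next on the same mask can take it. The two launches, condensed from the trace (binary path shortened):

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # runs on core 0 without locking it
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # same core; succeeds and claims the lock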
00:06:00.085 [2024-11-20 16:00:35.824979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.085 [2024-11-20 16:00:35.857395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1055663 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1055663 /var/tmp/spdk2.sock 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1055663 ']' 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.657 16:00:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.657 [2024-11-20 16:00:36.582941] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:06:00.657 [2024-11-20 16:00:36.582997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055663 ] 00:06:00.918 [2024-11-20 16:00:36.667073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.918 [2024-11-20 16:00:36.729491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.487 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.487 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.487 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1055663 00:06:01.487 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1055663 00:06:01.488 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.057 lslocks: write error 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1055354 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1055354 ']' 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1055354 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1055354 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1055354' 00:06:02.057 killing process with pid 1055354 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1055354 00:06:02.057 16:00:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1055354 00:06:02.316 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1055663 00:06:02.316 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1055663 ']' 00:06:02.316 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1055663 00:06:02.316 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.316 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.316 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1055663 00:06:02.317 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.317 16:00:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.317 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1055663' 00:06:02.317 killing process with pid 1055663 00:06:02.317 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1055663 00:06:02.317 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1055663 00:06:02.577 00:06:02.577 real 0m2.683s 00:06:02.577 user 0m3.012s 00:06:02.577 sys 0m0.796s 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.577 ************************************ 00:06:02.577 END TEST locking_app_on_unlocked_coremask 00:06:02.577 ************************************ 00:06:02.577 16:00:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:02.577 16:00:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.577 16:00:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.577 16:00:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.577 ************************************ 00:06:02.577 START TEST locking_app_on_locked_coremask 00:06:02.577 ************************************ 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1056055 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1056055 /var/tmp/spdk.sock 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1056055 ']' 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.577 16:00:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.577 [2024-11-20 16:00:38.498858] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:06:02.577 [2024-11-20 16:00:38.498911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056055 ] 00:06:02.837 [2024-11-20 16:00:38.582479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.837 [2024-11-20 16:00:38.614489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1056082 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1056082 /var/tmp/spdk2.sock 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1056082 /var/tmp/spdk2.sock 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1056082 /var/tmp/spdk2.sock 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1056082 ']' 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.412 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.412 [2024-11-20 16:00:39.341052] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:06:03.412 [2024-11-20 16:00:39.341106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056082 ] 00:06:03.675 [2024-11-20 16:00:39.428793] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1056055 has claimed it. 00:06:03.675 [2024-11-20 16:00:39.428827] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1056082) - No such process 00:06:04.246 ERROR: process (pid: 1056082) is no longer running 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1056055 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1056055 00:06:04.246 16:00:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.817 lslocks: write error 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1056055 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1056055 ']' 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1056055 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056055 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056055' 00:06:04.817 killing process with pid 1056055 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1056055 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1056055 00:06:04.817 00:06:04.817 real 0m2.275s 00:06:04.817 user 0m2.559s 00:06:04.817 sys 0m0.652s 00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:04.817 16:00:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.817 ************************************ 00:06:04.817 END TEST locking_app_on_locked_coremask 00:06:04.817 ************************************ 00:06:05.077 16:00:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:05.077 16:00:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.077 16:00:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.078 16:00:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.078 ************************************ 00:06:05.078 START TEST locking_overlapped_coremask 00:06:05.078 ************************************ 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1056436 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1056436 /var/tmp/spdk.sock 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1056436 ']' 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.078 16:00:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.078 [2024-11-20 16:00:40.853168] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
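locking_app_on_locked_coremask, closed out above, is the mirror image: the first target keeps its core 0 lock, so a second instance on the same mask must die with the "Cannot create lock on core 0, probably process 1056055 has claimed it" error seen in the trace. A condensed sketch of the scenario (the harness launches the second target and uses NOT waitforlisten to assert it never comes up):

    build/bin/spdk_tgt -m 0x1 &                        # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & # "Unable to acquire lock ... exiting"
    NOT waitforlisten $! /var/tmp/spdk2.sock           # expected: it never starts listening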
00:06:05.078 [2024-11-20 16:00:40.853229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056436 ] 00:06:05.078 [2024-11-20 16:00:40.938757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.078 [2024-11-20 16:00:40.975785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.078 [2024-11-20 16:00:40.975937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.078 [2024-11-20 16:00:40.975938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1056764 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1056764 /var/tmp/spdk2.sock 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1056764 /var/tmp/spdk2.sock 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1056764 /var/tmp/spdk2.sock 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1056764 ']' 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.019 16:00:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.019 [2024-11-20 16:00:41.710445] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:06:06.019 [2024-11-20 16:00:41.710497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056764 ] 00:06:06.019 [2024-11-20 16:00:41.824533] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1056436 has claimed it. 00:06:06.019 [2024-11-20 16:00:41.824575] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1056764) - No such process 00:06:06.589 ERROR: process (pid: 1056764) is no longer running 00:06:06.589 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1056436 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1056436 ']' 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1056436 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056436 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056436' 00:06:06.590 killing process with pid 1056436 00:06:06.590 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1056436 00:06:06.590 16:00:42 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1056436 00:06:06.851 00:06:06.851 real 0m1.783s 00:06:06.851 user 0m5.155s 00:06:06.851 sys 0m0.389s 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.851 ************************************ 00:06:06.851 END TEST locking_overlapped_coremask 00:06:06.851 ************************************ 00:06:06.851 16:00:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:06.851 16:00:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.851 16:00:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.851 16:00:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.851 ************************************ 00:06:06.851 START TEST locking_overlapped_coremask_via_rpc 00:06:06.851 ************************************ 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1056812 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1056812 /var/tmp/spdk.sock 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1056812 ']' 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.851 16:00:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.852 [2024-11-20 16:00:42.712036] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:06:06.852 [2024-11-20 16:00:42.712090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056812 ] 00:06:07.112 [2024-11-20 16:00:42.795503] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.112 [2024-11-20 16:00:42.795537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.112 [2024-11-20 16:00:42.834898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.112 [2024-11-20 16:00:42.835050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.112 [2024-11-20 16:00:42.835050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1057144 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1057144 /var/tmp/spdk2.sock 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1057144 ']' 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.683 16:00:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.683 [2024-11-20 16:00:43.575240] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:06:07.683 [2024-11-20 16:00:43.575296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057144 ] 00:06:07.945 [2024-11-20 16:00:43.688061] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.945 [2024-11-20 16:00:43.688093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.945 [2024-11-20 16:00:43.766045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.945 [2024-11-20 16:00:43.766217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.945 [2024-11-20 16:00:43.766218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.516 [2024-11-20 16:00:44.371237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1056812 has claimed it. 
00:06:08.516 request: 00:06:08.516 { 00:06:08.516 "method": "framework_enable_cpumask_locks", 00:06:08.516 "req_id": 1 00:06:08.516 } 00:06:08.516 Got JSON-RPC error response 00:06:08.516 response: 00:06:08.516 { 00:06:08.516 "code": -32603, 00:06:08.516 "message": "Failed to claim CPU core: 2" 00:06:08.516 } 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1056812 /var/tmp/spdk.sock 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1056812 ']' 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.516 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1057144 /var/tmp/spdk2.sock 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1057144 ']' 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
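The failed request/response above is exactly what this test is after: a second spdk_tgt started with --disable-cpumask-locks comes up cleanly on an overlapping core mask, but asking it to take the locks later over JSON-RPC has to fail while the first target still holds core 2. A minimal sketch of the sequence being driven, assuming a built SPDK tree under an illustrative $SPDK_DIR and no other targets running (the harness additionally waits for each socket via waitforlisten before issuing the RPC):

    # First target claims cores 0-2 (mask 0x7) and takes the per-core lock files.
    $SPDK_DIR/build/bin/spdk_tgt -m 0x7 &

    # Second target overlaps on core 2 (mask 0x1c covers cores 2-4) but defers locking.
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

    # Claiming the locks now collides with the first target and returns the
    # JSON-RPC error seen above: code -32603, "Failed to claim CPU core: 2".
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The locks themselves are ordinary files, /var/tmp/spdk_cpu_lock_000 through _002 here, which is what check_remaining_locks compares against the expected brace expansion.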
00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.776 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.037 00:06:09.037 real 0m2.092s 00:06:09.037 user 0m0.858s 00:06:09.037 sys 0m0.163s 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.037 16:00:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.037 ************************************ 00:06:09.037 END TEST locking_overlapped_coremask_via_rpc 00:06:09.037 ************************************ 00:06:09.037 16:00:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:09.037 16:00:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1056812 ]] 00:06:09.037 16:00:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1056812 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1056812 ']' 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1056812 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056812 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056812' 00:06:09.037 killing process with pid 1056812 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1056812 00:06:09.037 16:00:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1056812 00:06:09.298 16:00:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1057144 ]] 00:06:09.298 16:00:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1057144 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1057144 ']' 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1057144 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1057144 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1057144' 00:06:09.298 killing process with pid 1057144 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1057144 00:06:09.298 16:00:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1057144 00:06:09.559 16:00:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.559 16:00:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.559 16:00:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1056812 ]] 00:06:09.559 16:00:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1056812 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1056812 ']' 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1056812 00:06:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1056812) - No such process 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1056812 is not found' 00:06:09.559 Process with pid 1056812 is not found 00:06:09.559 16:00:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1057144 ]] 00:06:09.559 16:00:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1057144 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1057144 ']' 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1057144 00:06:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1057144) - No such process 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1057144 is not found' 00:06:09.559 Process with pid 1057144 is not found 00:06:09.559 16:00:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.559 00:06:09.559 real 0m16.115s 00:06:09.559 user 0m28.223s 00:06:09.559 sys 0m4.907s 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.559 16:00:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.559 ************************************ 00:06:09.559 END TEST cpu_locks 00:06:09.559 ************************************ 00:06:09.559 00:06:09.559 real 0m41.987s 00:06:09.559 user 1m22.475s 00:06:09.559 sys 0m8.281s 00:06:09.559 16:00:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.559 16:00:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.559 ************************************ 00:06:09.559 END TEST event 00:06:09.559 ************************************ 00:06:09.559 16:00:45 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.559 16:00:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.559 16:00:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.559 16:00:45 -- common/autotest_common.sh@10 -- # set +x 00:06:09.559 ************************************ 00:06:09.559 START TEST thread 00:06:09.559 ************************************ 00:06:09.559 16:00:45 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.821 * Looking for test storage... 00:06:09.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:09.821 16:00:45 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.821 16:00:45 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.821 16:00:45 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.821 16:00:45 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.821 16:00:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.821 16:00:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.821 16:00:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.821 16:00:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.821 16:00:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.821 16:00:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.821 16:00:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.821 16:00:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.821 16:00:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.821 16:00:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.821 16:00:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.821 16:00:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:09.821 16:00:45 thread -- scripts/common.sh@345 -- # : 1 00:06:09.821 16:00:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.821 16:00:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.821 16:00:45 thread -- scripts/common.sh@365 -- # decimal 1 00:06:09.821 16:00:45 thread -- scripts/common.sh@353 -- # local d=1 00:06:09.821 16:00:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.821 16:00:45 thread -- scripts/common.sh@355 -- # echo 1 00:06:09.821 16:00:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.821 16:00:45 thread -- scripts/common.sh@366 -- # decimal 2 00:06:09.821 16:00:45 thread -- scripts/common.sh@353 -- # local d=2 00:06:09.821 16:00:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.821 16:00:45 thread -- scripts/common.sh@355 -- # echo 2 00:06:09.821 16:00:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.821 16:00:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.821 16:00:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.821 16:00:45 thread -- scripts/common.sh@368 -- # return 0 00:06:09.821 16:00:45 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.821 16:00:45 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.821 --rc genhtml_branch_coverage=1 00:06:09.821 --rc genhtml_function_coverage=1 00:06:09.821 --rc genhtml_legend=1 00:06:09.821 --rc geninfo_all_blocks=1 00:06:09.821 --rc geninfo_unexecuted_blocks=1 00:06:09.821 00:06:09.821 ' 00:06:09.821 16:00:45 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.822 --rc genhtml_branch_coverage=1 00:06:09.822 --rc genhtml_function_coverage=1 00:06:09.822 --rc genhtml_legend=1 00:06:09.822 --rc geninfo_all_blocks=1 00:06:09.822 --rc geninfo_unexecuted_blocks=1 00:06:09.822 
00:06:09.822 ' 00:06:09.822 16:00:45 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.822 --rc genhtml_branch_coverage=1 00:06:09.822 --rc genhtml_function_coverage=1 00:06:09.822 --rc genhtml_legend=1 00:06:09.822 --rc geninfo_all_blocks=1 00:06:09.822 --rc geninfo_unexecuted_blocks=1 00:06:09.822 00:06:09.822 ' 00:06:09.822 16:00:45 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.822 --rc genhtml_branch_coverage=1 00:06:09.822 --rc genhtml_function_coverage=1 00:06:09.822 --rc genhtml_legend=1 00:06:09.822 --rc geninfo_all_blocks=1 00:06:09.822 --rc geninfo_unexecuted_blocks=1 00:06:09.822 00:06:09.822 ' 00:06:09.822 16:00:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.822 16:00:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.822 16:00:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.822 16:00:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.822 ************************************ 00:06:09.822 START TEST thread_poller_perf 00:06:09.822 ************************************ 00:06:09.822 16:00:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.822 [2024-11-20 16:00:45.678853] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:06:09.822 [2024-11-20 16:00:45.678968] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057588 ] 00:06:10.082 [2024-11-20 16:00:45.766504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.082 [2024-11-20 16:00:45.805546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.082 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:11.024 [2024-11-20T15:00:46.960Z] ====================================== 00:06:11.024 [2024-11-20T15:00:46.960Z] busy:2410008328 (cyc) 00:06:11.024 [2024-11-20T15:00:46.960Z] total_run_count: 418000 00:06:11.024 [2024-11-20T15:00:46.960Z] tsc_hz: 2400000000 (cyc) 00:06:11.024 [2024-11-20T15:00:46.960Z] ====================================== 00:06:11.024 [2024-11-20T15:00:46.960Z] poller_cost: 5765 (cyc), 2402 (nsec) 00:06:11.024 00:06:11.024 real 0m1.182s 00:06:11.024 user 0m1.100s 00:06:11.024 sys 0m0.077s 00:06:11.024 16:00:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.024 16:00:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.024 ************************************ 00:06:11.024 END TEST thread_poller_perf 00:06:11.024 ************************************ 00:06:11.024 16:00:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.024 16:00:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:11.024 16:00:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.024 16:00:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.024 ************************************ 00:06:11.024 START TEST thread_poller_perf 00:06:11.024 ************************************ 00:06:11.024 16:00:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.024 [2024-11-20 16:00:46.933029] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:06:11.024 [2024-11-20 16:00:46.933128] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057939 ] 00:06:11.285 [2024-11-20 16:00:47.020655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.285 [2024-11-20 16:00:47.054442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.285 Running 1000 pollers for 1 seconds with 0 microseconds period. 
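The poller_cost figures in the 1-microsecond-period run above are just busy cycles divided by run count, converted to nanoseconds via the reported tsc_hz. A quick check with the printed values; integer truncation (rather than rounding) is an assumption on my part, but it reproduces both numbers exactly:

    awk 'BEGIN { busy = 2410008328; runs = 418000; hz = 2400000000
                 cyc  = int(busy / runs)         # 5765 cyc per poll
                 nsec = int(cyc / (hz / 1e9))    # 2402 nsec at 2.4 GHz
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec }'

The same arithmetic applies to the 0-microsecond run whose results follow: 2401252250 / 5564000 gives 431 cyc, i.e. 179 nsec.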
00:06:12.226 [2024-11-20T15:00:48.162Z] ====================================== 00:06:12.226 [2024-11-20T15:00:48.162Z] busy:2401252250 (cyc) 00:06:12.226 [2024-11-20T15:00:48.162Z] total_run_count: 5564000 00:06:12.226 [2024-11-20T15:00:48.162Z] tsc_hz: 2400000000 (cyc) 00:06:12.226 [2024-11-20T15:00:48.162Z] ====================================== 00:06:12.226 [2024-11-20T15:00:48.162Z] poller_cost: 431 (cyc), 179 (nsec) 00:06:12.226 00:06:12.226 real 0m1.170s 00:06:12.226 user 0m1.085s 00:06:12.226 sys 0m0.082s 00:06:12.226 16:00:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.226 16:00:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.226 ************************************ 00:06:12.226 END TEST thread_poller_perf 00:06:12.226 ************************************ 00:06:12.226 16:00:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.226 00:06:12.226 real 0m2.692s 00:06:12.226 user 0m2.368s 00:06:12.226 sys 0m0.337s 00:06:12.226 16:00:48 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.226 16:00:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.226 ************************************ 00:06:12.226 END TEST thread 00:06:12.226 ************************************ 00:06:12.226 16:00:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:12.226 16:00:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.226 16:00:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.226 16:00:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.226 16:00:48 -- common/autotest_common.sh@10 -- # set +x 00:06:12.487 ************************************ 00:06:12.487 START TEST app_cmdline 00:06:12.487 ************************************ 00:06:12.487 16:00:48 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.487 * Looking for test storage... 
00:06:12.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.487 16:00:48 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.487 16:00:48 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.487 16:00:48 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.488 16:00:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.488 --rc genhtml_branch_coverage=1 00:06:12.488 --rc genhtml_function_coverage=1 00:06:12.488 --rc genhtml_legend=1 00:06:12.488 --rc geninfo_all_blocks=1 00:06:12.488 --rc geninfo_unexecuted_blocks=1 00:06:12.488 00:06:12.488 ' 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.488 --rc genhtml_branch_coverage=1 00:06:12.488 --rc genhtml_function_coverage=1 00:06:12.488 --rc genhtml_legend=1 00:06:12.488 --rc geninfo_all_blocks=1 00:06:12.488 --rc geninfo_unexecuted_blocks=1 
00:06:12.488 00:06:12.488 ' 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.488 --rc genhtml_branch_coverage=1 00:06:12.488 --rc genhtml_function_coverage=1 00:06:12.488 --rc genhtml_legend=1 00:06:12.488 --rc geninfo_all_blocks=1 00:06:12.488 --rc geninfo_unexecuted_blocks=1 00:06:12.488 00:06:12.488 ' 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.488 --rc genhtml_branch_coverage=1 00:06:12.488 --rc genhtml_function_coverage=1 00:06:12.488 --rc genhtml_legend=1 00:06:12.488 --rc geninfo_all_blocks=1 00:06:12.488 --rc geninfo_unexecuted_blocks=1 00:06:12.488 00:06:12.488 ' 00:06:12.488 16:00:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.488 16:00:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1058318 00:06:12.488 16:00:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1058318 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1058318 ']' 00:06:12.488 16:00:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.488 16:00:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.749 [2024-11-20 16:00:48.463445] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:06:12.749 [2024-11-20 16:00:48.463523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058318 ] 00:06:12.749 [2024-11-20 16:00:48.548651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.749 [2024-11-20 16:00:48.583515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.322 16:00:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.322 16:00:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:13.322 16:00:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:13.583 { 00:06:13.583 "version": "SPDK v25.01-pre git sha1 d3dfde872", 00:06:13.583 "fields": { 00:06:13.583 "major": 25, 00:06:13.583 "minor": 1, 00:06:13.583 "patch": 0, 00:06:13.583 "suffix": "-pre", 00:06:13.583 "commit": "d3dfde872" 00:06:13.583 } 00:06:13.583 } 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.583 16:00:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:13.583 16:00:49 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.843 request: 00:06:13.843 { 00:06:13.843 "method": "env_dpdk_get_mem_stats", 00:06:13.843 "req_id": 1 00:06:13.843 } 00:06:13.843 Got JSON-RPC error response 00:06:13.843 response: 00:06:13.843 { 00:06:13.843 "code": -32601, 00:06:13.843 "message": "Method not found" 00:06:13.843 } 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.843 16:00:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1058318 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1058318 ']' 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1058318 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1058318 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1058318' 00:06:13.843 killing process with pid 1058318 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@973 -- # kill 1058318 00:06:13.843 16:00:49 app_cmdline -- common/autotest_common.sh@978 -- # wait 1058318 00:06:14.103 00:06:14.103 real 0m1.701s 00:06:14.103 user 0m2.043s 00:06:14.103 sys 0m0.449s 00:06:14.103 16:00:49 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.103 16:00:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.103 ************************************ 00:06:14.103 END TEST app_cmdline 00:06:14.103 ************************************ 00:06:14.103 16:00:49 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.103 16:00:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.103 16:00:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.103 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:06:14.103 ************************************ 00:06:14.103 START TEST version 00:06:14.103 ************************************ 00:06:14.103 16:00:49 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.365 * Looking for test storage... 
00:06:14.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.365 16:00:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.365 16:00:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.365 16:00:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.365 16:00:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.365 16:00:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.365 16:00:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.365 16:00:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.365 16:00:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.365 16:00:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.365 16:00:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.365 16:00:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.365 16:00:50 version -- scripts/common.sh@344 -- # case "$op" in 00:06:14.365 16:00:50 version -- scripts/common.sh@345 -- # : 1 00:06:14.365 16:00:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.365 16:00:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.365 16:00:50 version -- scripts/common.sh@365 -- # decimal 1 00:06:14.365 16:00:50 version -- scripts/common.sh@353 -- # local d=1 00:06:14.365 16:00:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.365 16:00:50 version -- scripts/common.sh@355 -- # echo 1 00:06:14.365 16:00:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.365 16:00:50 version -- scripts/common.sh@366 -- # decimal 2 00:06:14.365 16:00:50 version -- scripts/common.sh@353 -- # local d=2 00:06:14.365 16:00:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.365 16:00:50 version -- scripts/common.sh@355 -- # echo 2 00:06:14.365 16:00:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.365 16:00:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.365 16:00:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.365 16:00:50 version -- scripts/common.sh@368 -- # return 0 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.365 --rc genhtml_branch_coverage=1 00:06:14.365 --rc genhtml_function_coverage=1 00:06:14.365 --rc genhtml_legend=1 00:06:14.365 --rc geninfo_all_blocks=1 00:06:14.365 --rc geninfo_unexecuted_blocks=1 00:06:14.365 00:06:14.365 ' 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.365 --rc genhtml_branch_coverage=1 00:06:14.365 --rc genhtml_function_coverage=1 00:06:14.365 --rc genhtml_legend=1 00:06:14.365 --rc geninfo_all_blocks=1 00:06:14.365 --rc geninfo_unexecuted_blocks=1 00:06:14.365 00:06:14.365 ' 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.365 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.365 --rc genhtml_branch_coverage=1 00:06:14.365 --rc genhtml_function_coverage=1 00:06:14.365 --rc genhtml_legend=1 00:06:14.365 --rc geninfo_all_blocks=1 00:06:14.365 --rc geninfo_unexecuted_blocks=1 00:06:14.365 00:06:14.365 ' 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.365 --rc genhtml_branch_coverage=1 00:06:14.365 --rc genhtml_function_coverage=1 00:06:14.365 --rc genhtml_legend=1 00:06:14.365 --rc geninfo_all_blocks=1 00:06:14.365 --rc geninfo_unexecuted_blocks=1 00:06:14.365 00:06:14.365 ' 00:06:14.365 16:00:50 version -- app/version.sh@17 -- # get_header_version major 00:06:14.365 16:00:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # cut -f2 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.365 16:00:50 version -- app/version.sh@17 -- # major=25 00:06:14.365 16:00:50 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.365 16:00:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # cut -f2 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.365 16:00:50 version -- app/version.sh@18 -- # minor=1 00:06:14.365 16:00:50 version -- app/version.sh@19 -- # get_header_version patch 00:06:14.365 16:00:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # cut -f2 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.365 16:00:50 version -- app/version.sh@19 -- # patch=0 00:06:14.365 16:00:50 version -- app/version.sh@20 -- # get_header_version suffix 00:06:14.365 16:00:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # cut -f2 00:06:14.365 16:00:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.365 16:00:50 version -- app/version.sh@20 -- # suffix=-pre 00:06:14.365 16:00:50 version -- app/version.sh@22 -- # version=25.1 00:06:14.365 16:00:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:14.365 16:00:50 version -- app/version.sh@28 -- # version=25.1rc0 00:06:14.365 16:00:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:14.365 16:00:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.365 16:00:50 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:14.365 16:00:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:14.365 00:06:14.365 real 0m0.283s 00:06:14.365 user 0m0.166s 00:06:14.365 sys 0m0.165s 00:06:14.365 16:00:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.365 
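The version assembly traced above boils down to a few lines of shell: each field is grepped out of include/spdk/version.h, the patch component is appended only when non-zero, and the -pre suffix becomes rc0. A compact re-creation, assuming the header separates macro names from values with tabs (which the cut -f2 in the trace implies); the function wrapper is illustrative, and the -pre-to-rc0 step is inferred from the traced assignments rather than quoted from app/version.sh:

    get_header_version() {   # e.g. MAJOR -> 25
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0
    echo "$version"                      # 25.1rc0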
16:00:50 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.365 ************************************ 00:06:14.365 END TEST version 00:06:14.365 ************************************ 00:06:14.365 16:00:50 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:14.365 16:00:50 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:14.626 16:00:50 -- spdk/autotest.sh@194 -- # uname -s 00:06:14.626 16:00:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:14.626 16:00:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.626 16:00:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.626 16:00:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:14.626 16:00:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:14.626 16:00:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:14.626 16:00:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.626 16:00:50 -- common/autotest_common.sh@10 -- # set +x 00:06:14.626 16:00:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:14.626 16:00:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:14.626 16:00:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:14.626 16:00:50 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:14.626 16:00:50 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:14.626 16:00:50 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:14.626 16:00:50 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.626 16:00:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.626 16:00:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.626 16:00:50 -- common/autotest_common.sh@10 -- # set +x 00:06:14.626 ************************************ 00:06:14.626 START TEST nvmf_tcp 00:06:14.626 ************************************ 00:06:14.626 16:00:50 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.626 * Looking for test storage... 
00:06:14.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.626 16:00:50 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.626 16:00:50 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.626 16:00:50 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.887 16:00:50 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.887 --rc genhtml_branch_coverage=1 00:06:14.887 --rc genhtml_function_coverage=1 00:06:14.887 --rc genhtml_legend=1 00:06:14.887 --rc geninfo_all_blocks=1 00:06:14.887 --rc geninfo_unexecuted_blocks=1 00:06:14.887 00:06:14.887 ' 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.887 --rc genhtml_branch_coverage=1 00:06:14.887 --rc genhtml_function_coverage=1 00:06:14.887 --rc genhtml_legend=1 00:06:14.887 --rc geninfo_all_blocks=1 00:06:14.887 --rc geninfo_unexecuted_blocks=1 00:06:14.887 00:06:14.887 ' 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:14.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.887 --rc genhtml_branch_coverage=1 00:06:14.887 --rc genhtml_function_coverage=1 00:06:14.887 --rc genhtml_legend=1 00:06:14.887 --rc geninfo_all_blocks=1 00:06:14.887 --rc geninfo_unexecuted_blocks=1 00:06:14.887 00:06:14.887 ' 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.887 --rc genhtml_branch_coverage=1 00:06:14.887 --rc genhtml_function_coverage=1 00:06:14.887 --rc genhtml_legend=1 00:06:14.887 --rc geninfo_all_blocks=1 00:06:14.887 --rc geninfo_unexecuted_blocks=1 00:06:14.887 00:06:14.887 ' 00:06:14.887 16:00:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:14.887 16:00:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:14.887 16:00:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.887 16:00:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.887 ************************************ 00:06:14.887 START TEST nvmf_target_core 00:06:14.887 ************************************ 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.887 * Looking for test storage... 00:06:14.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:14.887 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:14.888 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.888 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.888 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:14.888 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:14.888 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.888 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:14.888 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.148 --rc genhtml_branch_coverage=1 00:06:15.148 --rc genhtml_function_coverage=1 00:06:15.148 --rc genhtml_legend=1 00:06:15.148 --rc geninfo_all_blocks=1 00:06:15.148 --rc geninfo_unexecuted_blocks=1 00:06:15.148 00:06:15.148 ' 00:06:15.148 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.149 --rc genhtml_branch_coverage=1 00:06:15.149 --rc genhtml_function_coverage=1 00:06:15.149 --rc genhtml_legend=1 00:06:15.149 --rc geninfo_all_blocks=1 00:06:15.149 --rc geninfo_unexecuted_blocks=1 00:06:15.149 00:06:15.149 ' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.149 --rc genhtml_branch_coverage=1 00:06:15.149 --rc genhtml_function_coverage=1 00:06:15.149 --rc genhtml_legend=1 00:06:15.149 --rc geninfo_all_blocks=1 00:06:15.149 --rc geninfo_unexecuted_blocks=1 00:06:15.149 00:06:15.149 ' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.149 --rc genhtml_branch_coverage=1 00:06:15.149 --rc genhtml_function_coverage=1 00:06:15.149 --rc genhtml_legend=1 00:06:15.149 --rc geninfo_all_blocks=1 00:06:15.149 --rc geninfo_unexecuted_blocks=1 00:06:15.149 00:06:15.149 ' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.149 
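The "[: : integer expression expected" message traced above comes from nvmf/common.sh line 33 testing an unset flag with -eq ("'[' '' -eq 1 ']'"); the harness tolerates it because the test simply returns non-zero and the branch is skipped. A minimal bash sketch of the failure mode and two guarded variants (the variable name "flag" is a hypothetical stand-in; the actual flag tested at line 33 is not visible in this log):

```bash
#!/usr/bin/env bash
# Reproduces the "[: : integer expression expected" error seen above:
# the test(1) builtin requires integer operands for -eq, and an
# unset/empty variable expands to the empty string.
flag=""                             # stand-in for the unset flag
[ "$flag" -eq 1 ] && echo on        # prints the error, returns non-zero

# Guarded variants that stay silent when the flag is unset:
[ "${flag:-0}" -eq 1 ] && echo on   # default the empty value to 0
[[ $flag == 1 ]] && echo on         # string compare, no integer coercion
```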
************************************ 00:06:15.149 START TEST nvmf_abort 00:06:15.149 ************************************ 00:06:15.149 16:00:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.149 * Looking for test storage... 00:06:15.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.149 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.149 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.149 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.411 --rc genhtml_branch_coverage=1 00:06:15.411 --rc genhtml_function_coverage=1 00:06:15.411 --rc genhtml_legend=1 00:06:15.411 --rc geninfo_all_blocks=1 00:06:15.411 --rc geninfo_unexecuted_blocks=1 00:06:15.411 00:06:15.411 ' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.411 --rc genhtml_branch_coverage=1 00:06:15.411 --rc genhtml_function_coverage=1 00:06:15.411 --rc genhtml_legend=1 00:06:15.411 --rc geninfo_all_blocks=1 00:06:15.411 --rc geninfo_unexecuted_blocks=1 00:06:15.411 00:06:15.411 ' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.411 --rc genhtml_branch_coverage=1 00:06:15.411 --rc genhtml_function_coverage=1 00:06:15.411 --rc genhtml_legend=1 00:06:15.411 --rc geninfo_all_blocks=1 00:06:15.411 --rc geninfo_unexecuted_blocks=1 00:06:15.411 00:06:15.411 ' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.411 --rc genhtml_branch_coverage=1 00:06:15.411 --rc genhtml_function_coverage=1 00:06:15.411 --rc genhtml_legend=1 00:06:15.411 --rc geninfo_all_blocks=1 00:06:15.411 --rc geninfo_unexecuted_blocks=1 00:06:15.411 00:06:15.411 ' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.411 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
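Before nvmftestinit runs below, common.sh has already established the initiator identity: NVME_HOSTNQN comes from `nvme gen-hostnqn` and NVME_HOSTID is its uuid suffix, packed into the NVME_HOST connect arguments. A sketch of that derivation, assuming nvme-cli is installed; the commented `nvme connect` line is illustrative only, since the abort test below drives I/O through SPDK's userspace initiator rather than the kernel host stack:

```bash
#!/usr/bin/env bash
# Sketch of the initiator identity set up by nvmf/common.sh above.
NVME_HOSTNQN=$(nvme gen-hostnqn)      # random uuid-based host NQN
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # uuid portion, as in the log

# Kernel-initiator equivalent of the env above (not run by this test):
# nvme connect -t tcp -a 10.0.0.2 -s 4420 \
#   -n nqn.2016-06.io.spdk:cnode0 \
#   --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
```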
00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.412 16:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.556 16:00:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:23.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:23.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.556 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.557 16:00:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:23.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:23.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.557 16:00:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:06:23.557 00:06:23.557 --- 10.0.0.2 ping statistics --- 00:06:23.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.557 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:23.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:06:23.557 00:06:23.557 --- 10.0.0.1 ping statistics --- 00:06:23.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.557 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1062715 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1062715 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1062715 ']' 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.557 16:00:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.557 [2024-11-20 16:00:58.737039] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
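The nvmf_tcp_init sequence traced above builds the test topology from the two e810 ports found earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and connectivity is verified both ways. A condensed replay of the same commands, assuming the interface names from this rig (substitute your own NIC pair elsewhere):

```bash
#!/usr/bin/env bash
# Condensed replay of the nvmf_tcp_init steps traced above.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target side, isolated
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator, root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
```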
00:06:23.557 [2024-11-20 16:00:58.737108] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.557 [2024-11-20 16:00:58.839923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.557 [2024-11-20 16:00:58.894922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.557 [2024-11-20 16:00:58.894977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.557 [2024-11-20 16:00:58.894986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.557 [2024-11-20 16:00:58.894994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.557 [2024-11-20 16:00:58.895000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.557 [2024-11-20 16:00:58.897108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.557 [2024-11-20 16:00:58.897270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.557 [2024-11-20 16:00:58.897442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 [2024-11-20 16:00:59.624773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 Malloc0 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 Delay0 
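With nvmf_tgt up on cores 1-3 (-m 0xE) inside the namespace, abort.sh builds its backing stack over RPC: a 64 MiB malloc bdev with 4 KiB blocks, wrapped in a delay bdev whose 1-second artificial latencies keep submitted I/O in flight long enough for aborts to catch it. The trace uses the rpc_cmd wrapper; a sketch of the same calls via scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket shown in the log and a cwd at the SPDK repo root:

```bash
#!/usr/bin/env bash
# Same RPCs abort.sh issues above, via scripts/rpc.py.
# bdev_delay_create: -r/-t = avg/p99 read latency (us),
# -w/-n = avg/p99 write latency (us); 1000000 us = 1 s, so I/O
# lingers in the delay bdev and can still be aborted.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB, 4 KiB blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000
```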
00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 [2024-11-20 16:00:59.712852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.819 16:00:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:24.080 [2024-11-20 16:00:59.821519] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:26.136 Initializing NVMe Controllers 00:06:26.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:26.136 controller IO queue size 128 less than required 00:06:26.136 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:26.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:26.136 Initialization complete. Launching workers. 
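The remaining RPCs traced above publish Delay0 as namespace 1 of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420, after which build/examples/abort hammers it from core 0 for one second at queue depth 128; its completion statistics follow below. A condensed sketch of the same sequence, flags as logged:

```bash
#!/usr/bin/env bash
# Subsystem wiring plus the abort workload launched above.
RPC=./scripts/rpc.py
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Queue depth 128 against an I/O queue of size 128, so submissions
# back up in the driver and most I/Os are aborted rather than completed:
./build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```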
00:06:26.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28438 00:06:26.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28503, failed to submit 62 00:06:26.136 success 28442, unsuccessful 61, failed 0 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:26.136 rmmod nvme_tcp 00:06:26.136 rmmod nvme_fabrics 00:06:26.136 rmmod nvme_keyring 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1062715 ']' 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1062715 00:06:26.136 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1062715 ']' 00:06:26.137 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1062715 00:06:26.137 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:26.137 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.137 16:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1062715 00:06:26.137 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:26.137 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:26.137 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1062715' 00:06:26.137 killing process with pid 1062715 00:06:26.137 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1062715 00:06:26.137 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1062715 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:26.397 16:01:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.397 16:01:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.311 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:28.311 00:06:28.311 real 0m13.317s 00:06:28.311 user 0m13.610s 00:06:28.311 sys 0m6.648s 00:06:28.311 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.311 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.311 ************************************ 00:06:28.311 END TEST nvmf_abort 00:06:28.311 ************************************ 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:28.573 ************************************ 00:06:28.573 START TEST nvmf_ns_hotplug_stress 00:06:28.573 ************************************ 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:28.573 * Looking for test storage... 
00:06:28.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:28.573 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.835 --rc genhtml_branch_coverage=1 00:06:28.835 --rc genhtml_function_coverage=1 00:06:28.835 --rc genhtml_legend=1 00:06:28.835 --rc geninfo_all_blocks=1 00:06:28.835 --rc geninfo_unexecuted_blocks=1 00:06:28.835 00:06:28.835 ' 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.835 --rc genhtml_branch_coverage=1 00:06:28.835 --rc genhtml_function_coverage=1 00:06:28.835 --rc genhtml_legend=1 00:06:28.835 --rc geninfo_all_blocks=1 00:06:28.835 --rc geninfo_unexecuted_blocks=1 00:06:28.835 00:06:28.835 ' 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.835 --rc genhtml_branch_coverage=1 00:06:28.835 --rc genhtml_function_coverage=1 00:06:28.835 --rc genhtml_legend=1 00:06:28.835 --rc geninfo_all_blocks=1 00:06:28.835 --rc geninfo_unexecuted_blocks=1 00:06:28.835 00:06:28.835 ' 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.835 --rc genhtml_branch_coverage=1 00:06:28.835 --rc genhtml_function_coverage=1 00:06:28.835 --rc genhtml_legend=1 00:06:28.835 --rc geninfo_all_blocks=1 00:06:28.835 --rc geninfo_unexecuted_blocks=1 00:06:28.835 00:06:28.835 ' 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.835 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:28.836 16:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:36.979 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.979 
16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:36.979 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:36.979 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:36.980 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:36.980 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.980 16:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:36.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:36.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:06:36.980 00:06:36.980 --- 10.0.0.2 ping statistics --- 00:06:36.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.980 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:36.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:36.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:06:36.980 00:06:36.980 --- 10.0.0.1 ping statistics --- 00:06:36.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.980 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1067563 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1067563 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1067563 ']' 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.980 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:36.980 [2024-11-20 16:01:12.148332] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:06:36.980 [2024-11-20 16:01:12.148421] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.980 [2024-11-20 16:01:12.256140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.980 [2024-11-20 16:01:12.307658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.980 [2024-11-20 16:01:12.307710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.980 [2024-11-20 16:01:12.307719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.980 [2024-11-20 16:01:12.307726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.980 [2024-11-20 16:01:12.307732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
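
The trace above shows nvmftestinit wiring the two E810 ports into a self-contained test topology: one port (cvl_0_0) is moved into a private network namespace to act as the NVMe-oF target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP port 4420 and a ping in each direction to confirm reachability. Note also the "[: : integer expression expected" complaint from nvmf/common.sh line 33 above: an empty variable reaches a numeric test there, but the run proceeds regardless. A minimal sketch of the same setup, assuming two directly connected interfaces named cvl_0_0 and cvl_0_1 as in this run (condensed from the trace, not the verbatim common.sh):

    # Start from clean interfaces
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # Create a namespace for the target side and move one port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address the initiator (root ns) and target (inside the ns) ends
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # Bring both ends, plus loopback inside the namespace, up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in from the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the topology in place, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why the reactors below report starting on cores 1-3 of mask 0xE.
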
00:06:36.980 [2024-11-20 16:01:12.309603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.980 [2024-11-20 16:01:12.309763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.980 [2024-11-20 16:01:12.309764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.242 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.242 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:37.242 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.242 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.242 16:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.242 16:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.242 16:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:37.242 16:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:37.503 [2024-11-20 16:01:13.181827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.503 16:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:37.503 16:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.763 [2024-11-20 16:01:13.580878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.764 16:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:38.024 16:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:38.285 Malloc0 00:06:38.285 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:38.285 Delay0 00:06:38.547 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.547 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:38.808 NULL1 00:06:38.808 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:39.069 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1068253 00:06:39.069 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:39.069 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:39.069 16:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.330 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.330 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:39.330 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:39.590 true 00:06:39.590 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:39.590 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.852 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.852 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:40.113 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:40.113 true 00:06:40.113 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:40.113 16:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.494 Read completed with error (sct=0, sc=11) 00:06:41.494 16:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
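
From here the test enters its stress loop: spdk_nvme_perf drives 30 seconds of 512-byte random reads at queue depth 128 against cnode1, while the script repeatedly removes namespace 1, re-adds the Delay0 bdev, and grows NULL1 by one block per pass (null_size=1001, 1002, ...), using kill -0 on PERF_PID to confirm the initiator survived each hotplug event. The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the expected I/O failures while the namespace is detached. A condensed sketch of that loop, assuming rpc.py on PATH and the subsystem, Delay0, and NULL1 created as in the trace above (a paraphrase of the pattern, not the verbatim ns_hotplug_stress.sh):

    rpc=rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    # Launch the initiator-side load in the background and remember its pid
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    # Hotplug namespace 1 for as long as perf stays alive (about 30 s)
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns $nqn 1    # detach ns 1 under active I/O
        $rpc nvmf_subsystem_add_ns $nqn Delay0  # re-attach the delay bdev
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size  # grow the null bdev each pass
    done

The -Q 1000 flag on spdk_nvme_perf is what keeps the run going despite the induced errors: up to 1000 failed I/Os are tolerated and summarized as the suppressed-message lines rather than aborting the workload.
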
00:06:41.494 16:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:41.494 16:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:41.755 true 00:06:41.755 16:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:41.755 16:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.698 16:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.698 16:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:42.698 16:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:42.958 true 00:06:42.958 16:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:42.958 16:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.958 16:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.219 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:43.219 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:43.481 true 00:06:43.481 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:43.481 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.745 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.745 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:43.745 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:44.006 true 00:06:44.006 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:44.006 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.267 16:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.267 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:44.267 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:44.527 true 00:06:44.527 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:44.527 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.788 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.788 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:44.788 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:45.048 true 00:06:45.048 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:45.048 16:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.310 16:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.310 16:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:45.310 16:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:45.572 true 00:06:45.572 16:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:45.572 16:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.956 16:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.956 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:46.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.956 16:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:46.956 16:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:47.217 true 00:06:47.217 16:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:47.217 16:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.160 16:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.160 16:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:48.160 16:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:48.421 true 00:06:48.421 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:48.421 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.421 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.682 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:48.682 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:48.942 true 00:06:48.942 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:48.942 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.202 16:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.202 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:49.202 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:49.463 true 00:06:49.463 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:49.463 
16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.723 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.723 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:49.723 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:49.985 true 00:06:49.985 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:49.985 16:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.244 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.505 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:50.505 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:50.505 true 00:06:50.505 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:50.505 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.765 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.024 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:51.025 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:51.025 true 00:06:51.025 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:51.025 16:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.285 16:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.285 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:06:51.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.546 16:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:51.546 16:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:51.546 true 00:06:51.546 16:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:51.546 16:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.487 16:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.747 16:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:52.747 16:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:52.747 true 00:06:52.747 16:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:52.747 16:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.008 16:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.268 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:53.268 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:53.268 true 00:06:53.268 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:53.268 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.528 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.788 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:53.788 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1020 00:06:53.788 true 00:06:54.047 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:54.047 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.047 16:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.307 16:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:54.307 16:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:54.307 true 00:06:54.567 16:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:54.567 16:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.506 16:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.765 16:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:55.765 16:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:56.024 true 00:06:56.024 16:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:56.024 16:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.965 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.965 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:56.965 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:57.225 true 00:06:57.225 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1068253 00:06:57.225 16:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.225 16:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.485 16:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:57.485 16:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:57.745 true 00:06:57.745 16:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:57.745 16:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.124 16:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.124 16:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:59.124 16:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:59.124 true 00:06:59.124 16:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:06:59.124 16:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.063 16:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.323 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:00.323 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:00.323 true 00:07:00.323 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:00.323 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.582 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.842 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:00.842 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:00.842 true 00:07:00.842 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:00.842 16:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.225 16:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.225 16:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:02.225 16:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:02.485 true 00:07:02.485 16:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:02.485 16:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.424 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.424 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:03.424 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:03.685 true 00:07:03.685 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:03.685 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.945 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.945 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:03.945 16:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:04.206 true 00:07:04.206 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:04.206 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.469 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.469 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:04.469 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:04.729 true 00:07:04.729 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:04.729 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.989 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.989 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:04.989 16:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:05.249 true 00:07:05.249 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:05.249 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.510 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.770 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:05.770 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:05.770 true 00:07:05.770 16:01:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:05.770 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.029 16:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.288 16:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:06.288 16:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:06.288 true 00:07:06.288 16:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:06.289 16:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.668 16:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.668 16:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:07.668 16:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:07.928 true 00:07:07.928 16:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253 00:07:07.928 16:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.869 16:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.869 16:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:08.869 16:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:09.129 true 00:07:09.129 16:01:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253
00:07:09.129 16:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.389 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.389 Initializing NVMe Controllers
00:07:09.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:09.389 Controller IO queue size 128, less than required.
00:07:09.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:09.389 Controller IO queue size 128, less than required.
00:07:09.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:09.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:09.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:09.389 Initialization complete. Launching workers.
00:07:09.389 ========================================================
00:07:09.389                                                                                                Latency(us)
00:07:09.389 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:09.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1964.31       0.96   32594.43    1067.26 1007935.92
00:07:09.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15247.38       7.45    8367.74    1194.11  400764.04
00:07:09.389 ========================================================
00:07:09.389 Total                                                                    :   17211.69       8.40   11132.65    1067.26 1007935.92
00:07:09.389
00:07:09.389 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:07:09.389 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:07:09.648 true
00:07:09.648 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1068253
00:07:09.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1068253) - No such process
00:07:09.648 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1068253
00:07:09.648 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.907 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.907 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:09.907 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:09.907 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:09.907 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- #
(( i < nthreads )) 00:07:09.907 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:10.166 null0 00:07:10.166 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.166 16:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.166 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:10.427 null1 00:07:10.427 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.427 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.427 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:10.427 null2 00:07:10.427 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.427 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.427 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:10.710 null3 00:07:10.710 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.710 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.710 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:11.045 null4 00:07:11.045 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.045 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.045 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:11.045 null5 00:07:11.045 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.045 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.045 16:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:11.339 null6 00:07:11.339 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.339 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.339 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 
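
The @44-@53 records above are the single-namespace phase of the stress test: while the background I/O generator (PID 1068253 in this run) stays alive, the script keeps hot-removing and re-adding namespace 1 and grows the NULL1 bdev by one unit per pass (null_size=1021 ... 1037). The latency summary above is that generator exiting, which is why the very next kill -0 probe fails with "No such process" and the loop falls through to wait. A minimal sketch of the loop, reconstructed from the trace (only the rpc.py invocations are verbatim; the surrounding shell is an assumption):

    #!/usr/bin/env bash
    # Sketch of the sh@44-@53 hotplug/resize phase, inferred from the trace
    # records above; not the verbatim SPDK ns_hotplug_stress.sh source.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=1068253      # background I/O generator probed by kill -0
    null_size=1020

    while kill -0 "$perf_pid"; do                   # sh@44: generator still alive?
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # sh@45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # sh@46: hot-add it back
        ((null_size++))                             # sh@49: 1021, 1022, ...
        "$rpc" bdev_null_resize NULL1 "$null_size"  # sh@50: grow the null bdev
    done
    wait "$perf_pid"                                # sh@53: reap the generator

On the summary table itself, the Total row's average is consistent with an IOPS-weighted mean of the two per-namespace averages (an inference from the numbers, not documented output semantics):

    # Reproduces the 11132.65 us Total average from the two NSID rows above.
    awk 'BEGIN { printf "%.2f\n",
        (1964.31 * 32594.43 + 15247.38 * 8367.74) / (1964.31 + 15247.38) }'
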
00:07:11.339 null7 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
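
From this point the log interleaves eight concurrent workers, so parent and child records mix freely (note how the parent's pids+=($!) can land before the child's own add_remove trace). Each worker runs the add_remove helper traced at @14-@18: ten attach/detach rounds for one fixed namespace-ID/bdev pair. A sketch of the helper as inferred from those records (only the rpc.py calls are verbatim):

    # Sketch of add_remove per the sh@14-@18 records, e.g. "add_remove 1 null0".
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2                                       # sh@14
        for ((i = 0; i < 10; i++)); do                              # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # sh@18
        done
    }
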
00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
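
The @58-@64 bookkeeping threaded through these records is the launcher: eight null bdevs are created up front, then one backgrounded add_remove per bdev, with each child PID appended to the pids array. Sketched under the same caveat that only the traced commands are verbatim:

    # Sketch of the sh@58-@64 launcher inferred from the trace records.
    nthreads=8                                      # sh@58
    pids=()                                         # sh@58

    for ((i = 0; i < nthreads; i++)); do            # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096   # sh@60: size 100, block size 4096
    done

    for ((i = 0; i < nthreads; i++)); do            # sh@62
        add_remove $((i + 1)) "null$i" &            # sh@63: NSID i+1 over null$i
        pids+=($!)                                  # sh@64: remember the child PID
    done
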
00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
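
The wait 1074760 1074761 ... record just below is the matching join: the parent blocks at sh@66 until all eight workers exit, while the workers' own @16-@18 records keep streaming past it. In sketch form:

    wait "${pids[@]}"   # sh@66: join all eight add_remove workers at once
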
00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1074760 1074761 1074763 1074765 1074767 1074769 1074770 1074772 00:07:11.600 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.601 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.861 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.861 16:01:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.121 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.121 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.121 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.121 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.122 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.122 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.122 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.122 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.122 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.122 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.122 16:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.381 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.642 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.904 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.164 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.164 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.164 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.164 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.165 16:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.165 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.165 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.165 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.165 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.165 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.165 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.165 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.432 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.433 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.693 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
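
The interleaved records above and below all come from ns_hotplug_stress.sh lines 16-18: several background loops, one per null bdev, each attaching its namespace to nqn.2016-06.io.spdk:cnode1 and detaching it again, ten rounds apiece. A minimal sketch of that shape, reconstructed from the trace (the nsid-to-bdev pairing null0..null7 -> nsid 1..8 and the ten-iteration bound are read off the records; the exact script body and worker layout are assumptions):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as printed in the trace
subsys=nqn.2016-06.io.spdk:cnode1

hotplug_loop() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do                                  # line 16 in the xtrace
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"  # attach (line 17)
        "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"          # detach (line 18)
    done
}

for n in {0..7}; do
    hotplug_loop "$((n + 1))" "null$n" &  # concurrent workers explain the interleaved xtrace
done
wait

Because the workers run concurrently, the add/remove RPCs land on the target in arbitrary order, which is what exercises the namespace hotplug paths.
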
00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.955 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.216 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.216 16:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.216 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.478 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.740 16:01:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.740 16:01:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.740 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.001 16:01:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.001 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.262 16:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 
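
Here each worker's counter has reached 10, the EXIT trap is cleared, and nvmftestfini starts the teardown the next records trace: flush, unload the kernel NVMe/TCP modules, kill the target, and undo the network plumbing. A condensed sketch (function names and line numbers from the visible common.sh/autotest_common.sh fragments; the retry pause and error handling are simplified assumptions):

nvmfcleanup() {
    sync
    set +e                              # module removal may fail while references drain
    for i in {1..20}; do                # common.sh line 125
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # prints the rmmod lines
        sleep 1                         # assumed pause between retries
    done
    set -e
}

killprocess() {                         # autotest_common.sh lines 954-978
    local pid=$1 process_name
    kill -0 "$pid" || return 0          # nothing to do if the target already exited
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
    fi
    [[ $process_name == sudo ]] && return 1               # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}

nvmftestfini() {                        # nvmf_tcp_fini steps folded in for brevity
    nvmfcleanup
    [[ -n $nvmfpid ]] && killprocess "$nvmfpid"            # pid 1067563 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rules
    remove_spdk_ns                                         # delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1                               # clear the initiator address
}
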
00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.263 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.263 rmmod nvme_tcp 00:07:15.524 rmmod nvme_fabrics 00:07:15.524 rmmod nvme_keyring 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1067563 ']' 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1067563 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1067563 ']' 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1067563 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1067563 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1067563' 00:07:15.524 killing process with pid 1067563 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1067563 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1067563 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.524 16:01:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.524 16:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.074 00:07:18.074 real 0m49.189s 00:07:18.074 user 3m16.630s 00:07:18.074 sys 0m17.118s 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.074 ************************************ 00:07:18.074 END TEST nvmf_ns_hotplug_stress 00:07:18.074 ************************************ 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.074 ************************************ 00:07:18.074 START TEST nvmf_delete_subsystem 00:07:18.074 ************************************ 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.074 * Looking for test storage... 
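
The records below first report the test-storage lookup result, then trace delete_subsystem.sh probing the installed lcov to decide which coverage flags to export: "lt 1.15 2" walks the generic version comparator in scripts/common.sh. A simplified reconstruction from the traced line numbers (exact bodies may differ; the real comparator also handles more operators):

lt() { cmp_versions "$1" '<' "$2"; }      # scripts/common.sh line 373: is lcov 1.15 older than 2?

decimal() {                               # lines 353-355: validate one version field
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || d=0           # non-numeric fields count as 0 (simplification)
    echo "$d"
}

cmp_versions() {                          # lines 333-368
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"        # "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")   # missing fields compare as 0
        ver2[v]=$(decimal "${ver2[v]:-0}")
        (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]] && return 0 || return 1; }
        (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]] && return 0 || return 1; }
    done
    return 1                              # equal versions satisfy neither strict comparison
}

With 1.15 against 2 the first field already decides it (1 < 2), so lt returns 0 and the --rc lcov_branch_coverage / lcov_function_coverage options get exported in the LCOV_OPTS records that follow.
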
00:07:18.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.074 --rc genhtml_branch_coverage=1 00:07:18.074 --rc genhtml_function_coverage=1 00:07:18.074 --rc genhtml_legend=1 00:07:18.074 --rc geninfo_all_blocks=1 00:07:18.074 --rc geninfo_unexecuted_blocks=1 00:07:18.074 00:07:18.074 ' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.074 --rc genhtml_branch_coverage=1 00:07:18.074 --rc genhtml_function_coverage=1 00:07:18.074 --rc genhtml_legend=1 00:07:18.074 --rc geninfo_all_blocks=1 00:07:18.074 --rc geninfo_unexecuted_blocks=1 00:07:18.074 00:07:18.074 ' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.074 --rc genhtml_branch_coverage=1 00:07:18.074 --rc genhtml_function_coverage=1 00:07:18.074 --rc genhtml_legend=1 00:07:18.074 --rc geninfo_all_blocks=1 00:07:18.074 --rc geninfo_unexecuted_blocks=1 00:07:18.074 00:07:18.074 ' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.074 --rc genhtml_branch_coverage=1 00:07:18.074 --rc genhtml_function_coverage=1 00:07:18.074 --rc genhtml_legend=1 00:07:18.074 --rc geninfo_all_blocks=1 00:07:18.074 --rc geninfo_unexecuted_blocks=1 00:07:18.074 00:07:18.074 ' 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.074 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.075 16:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:26.218 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.218 
16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:26.218 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:26.218 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:26.218 Found net devices under 0000:4b:00.1: cvl_0_1 
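
Both E810 ports (0000:4b:00.0/1, device id 0x159b) are found with their cvl_0_0/cvl_0_1 net devices up, so nvmf_tcp_init builds the usual split-namespace topology: the target NIC moves into its own network namespace and the initiator reaches it over 10.0.0.0/24. Condensed from the commands the next records execute:

NVMF_TARGET_INTERFACE=cvl_0_0       # first port, will live inside the namespace
NVMF_INITIATOR_INTERFACE=cvl_0_1    # second port stays in the host namespace
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush $NVMF_TARGET_INTERFACE
ip -4 addr flush $NVMF_INITIATOR_INTERFACE
ip netns add $NVMF_TARGET_NAMESPACE
ip link set $NVMF_TARGET_INTERFACE netns $NVMF_TARGET_NAMESPACE
ip addr add 10.0.0.1/24 dev $NVMF_INITIATOR_INTERFACE
ip netns exec $NVMF_TARGET_NAMESPACE ip addr add 10.0.0.2/24 dev $NVMF_TARGET_INTERFACE
ip link set $NVMF_INITIATOR_INTERFACE up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set $NVMF_TARGET_INTERFACE up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
# open the NVMe/TCP listener port, tagged so teardown can strip exactly this rule
iptables -I INPUT 1 -i $NVMF_INITIATOR_INTERFACE -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # host -> target namespace
ip netns exec $NVMF_TARGET_NAMESPACE ping -c 1 10.0.0.1   # target namespace -> host

The two single-packet pings (0.676 ms and 0.278 ms in the statistics below) gate the rest of the test on the topology actually passing traffic before nvmf_tgt is launched inside the namespace.
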
00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:26.218 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.219 16:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:26.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:07:26.219 00:07:26.219 --- 10.0.0.2 ping statistics --- 00:07:26.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.219 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:07:26.219 00:07:26.219 --- 10.0.0.1 ping statistics --- 00:07:26.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.219 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1079947 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1079947 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1079947 ']' 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.219 16:02:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 [2024-11-20 16:02:01.337468] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:07:26.219 [2024-11-20 16:02:01.337533] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.219 [2024-11-20 16:02:01.425012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.219 [2024-11-20 16:02:01.483820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.219 [2024-11-20 16:02:01.483884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.219 [2024-11-20 16:02:01.483896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.219 [2024-11-20 16:02:01.483906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.219 [2024-11-20 16:02:01.483914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.219 [2024-11-20 16:02:01.485814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.219 [2024-11-20 16:02:01.485819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 [2024-11-20 16:02:01.639580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:26.219 16:02:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 [2024-11-20 16:02:01.663920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 NULL1 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 Delay0 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.219 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1080062 00:07:26.220 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:26.220 16:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:26.220 [2024-11-20 16:02:01.790936] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
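Before the deletion step below, the setup the trace has completed so far is worth collecting into one hedged sketch: one E810 port moved into a private network namespace as the target side (10.0.0.2), its sibling left in the default namespace as the initiator (10.0.0.1), an iptables exception for the NVMe/TCP port, and a delay-wrapped null bdev exported through subsystem nqn.2016-06.io.spdk:cnode1. Every command mirrors one visible in the trace; rpc.py is assumed here as a stand-in for the test's rpc_cmd wrapper, and error handling is omitted.

# Plumbing from nvmf_tcp_init: the target address lives inside the namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Target bring-up, as issued through rpc_cmd above. Delay0 wraps NULL1 with
# 1000000-unit latencies (microseconds, assuming SPDK's delay-bdev convention),
# so I/O is still outstanding when the subsystem is deleted.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512                  # size 1000, block size 512, as traced
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Load generator from the default namespace, queue depth 128 per worker.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

With roughly a second of injected latency per I/O and 128 commands queued per worker, nvmf_delete_subsystem is all but guaranteed to land while the queues are full. The (sct=0, sc=8) completions that dominate the next stretch of the log are consistent with NVMe generic status 0x08, Command Aborted due to SQ Deletion, which is the behaviour this test exists to exercise.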
00:07:28.135 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:28.135 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:28.135 16:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:28.135 Read completed with error (sct=0, sc=8)
00:07:28.135 Read completed with error (sct=0, sc=8)
00:07:28.135 starting I/O failed: -6
00:07:28.135 [... repeated Read/Write completed with error (sct=0, sc=8) completions with interleaved starting I/O failed: -6 markers ...]
00:07:28.135 [2024-11-20 16:02:03.875626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b3680 is same with the state(6) to be set
00:07:28.135 [... repeated Read/Write completed with error (sct=0, sc=8) completions, then further starting I/O failed: -6 groups ...]
00:07:28.135 [2024-11-20 16:02:03.881133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff45800d680 is same with the state(6) to be set
00:07:28.136 [... repeated Read/Write completed with error (sct=0, sc=8) completions ...]
00:07:28.136 [2024-11-20 16:02:03.881650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff45800d020 is same with the state(6) to be set
00:07:29.078 [2024-11-20 16:02:04.847637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b49a0 is same with the state(6) to be set
00:07:29.078 [... repeated Read/Write completed with error (sct=0, sc=8) completions ...]
00:07:29.078 [2024-11-20 16:02:04.879211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b34a0 is same with the state(6) to be set
00:07:29.078 [... repeated Read/Write completed with error (sct=0, sc=8) completions ...]
00:07:29.078 [2024-11-20 16:02:04.879552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b3860 is same with the state(6) to be set
00:07:29.078 [... repeated Read/Write completed with error (sct=0, sc=8) completions ...]
00:07:29.078 [2024-11-20 16:02:04.883121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff45800d350 is same with the state(6) to be set
00:07:29.078 [... repeated Read/Write completed with error (sct=0, sc=8) completions ...]
00:07:29.078 [2024-11-20 16:02:04.883409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff458000c40 is same with the state(6) to be set
00:07:29.078 Initializing NVMe Controllers
00:07:29.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:29.078 Controller IO queue size 128, less than required.
00:07:29.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:29.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:29.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:29.078 Initialization complete. Launching workers.
00:07:29.078 ========================================================
00:07:29.078                                                                              Latency(us)
00:07:29.078 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:07:29.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   170.78     0.08  892476.31     374.17 1006963.34
00:07:29.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   159.83     0.08  961622.29     531.03 2002363.51
00:07:29.078 ========================================================
00:07:29.078 Total                                                                    :   330.61     0.16  925903.81     374.17 2002363.51
00:07:29.078
00:07:29.078 [2024-11-20 16:02:04.883794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b49a0 (9): Bad file descriptor
00:07:29.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:29.078 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.078 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:29.078 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1080062
00:07:29.078 16:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1080062
00:07:29.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1080062) - No such process
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1080062
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1080062
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case
"$(type -t "$arg")" in 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1080062 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.650 [2024-11-20 16:02:05.413812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1080879 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879 00:07:29.650 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:29.650 [2024-11-20 16:02:05.512480] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:30.220 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:30.220 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879
00:07:30.220 16:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:30.790 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:30.790 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879
00:07:30.790 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:31.050 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:31.050 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879
00:07:31.050 16:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:31.620 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:31.620 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879
00:07:31.620 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:32.189 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:32.189 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879
00:07:32.189 16:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:32.760 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:32.760 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879
00:07:32.760 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:33.020 Initializing NVMe Controllers
00:07:33.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:33.020 Controller IO queue size 128, less than required.
00:07:33.020 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:33.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:33.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:33.020 Initialization complete. Launching workers.
00:07:33.020 ========================================================
00:07:33.020                                                                              Latency(us)
00:07:33.020 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:07:33.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1002221.24 1000201.35 1042098.35
00:07:33.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1003231.45 1000235.81 1042234.77
00:07:33.020 ========================================================
00:07:33.020 Total                                                                    :   256.00     0.12 1002726.34 1000201.35 1042234.77
00:07:33.020
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1080879
00:07:33.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1080879) - No such process
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1080879
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:33.280 16:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:33.280 rmmod nvme_tcp
00:07:33.280 rmmod nvme_fabrics
00:07:33.280 rmmod nvme_keyring
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1079947 ']'
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1079947
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1079947 ']'
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1079947
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1079947
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1079947' 00:07:33.280 killing process with pid 1079947 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1079947 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1079947 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:33.280 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.540 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.540 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.540 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.540 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.540 16:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.450 00:07:35.450 real 0m17.705s 00:07:35.450 user 0m29.541s 00:07:35.450 sys 0m6.782s 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.450 ************************************ 00:07:35.450 END TEST nvmf_delete_subsystem 00:07:35.450 ************************************ 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.450 ************************************ 00:07:35.450 START TEST nvmf_host_management 00:07:35.450 ************************************ 00:07:35.450 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:35.712 * Looking for test storage... 
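The teardown just traced (nvmftestfini via nvmfcleanup and killprocess) unwinds the fixture in reverse; a hedged condensation follows. The module removals, the pid-1079947 kill, the SPDK_NVMF iptables filter, and the address flush are all visible above; the netns removal happens inside _remove_spdk_ns, whose body is not traced here, so that line is an assumption.

# Unload the host-side NVMe fabrics modules (source of the rmmod output above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target: killprocess checks the pid still names the expected process
# (comm reactor_0, not sudo) before signalling and reaping it.
kill "$nvmfpid" && wait "$nvmfpid"

# Drop only the SPDK-tagged firewall rules, then retire the test namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper in the trace
ip netns delete cvl_0_0_ns_spdk                        # assumption: performed by _remove_spdk_ns
ip -4 addr flush cvl_0_1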
00:07:35.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.712 --rc genhtml_branch_coverage=1 00:07:35.712 --rc genhtml_function_coverage=1 00:07:35.712 --rc genhtml_legend=1 00:07:35.712 --rc geninfo_all_blocks=1 00:07:35.712 --rc geninfo_unexecuted_blocks=1 00:07:35.712 00:07:35.712 ' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.712 --rc genhtml_branch_coverage=1 00:07:35.712 --rc genhtml_function_coverage=1 00:07:35.712 --rc genhtml_legend=1 00:07:35.712 --rc geninfo_all_blocks=1 00:07:35.712 --rc geninfo_unexecuted_blocks=1 00:07:35.712 00:07:35.712 ' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.712 --rc genhtml_branch_coverage=1 00:07:35.712 --rc genhtml_function_coverage=1 00:07:35.712 --rc genhtml_legend=1 00:07:35.712 --rc geninfo_all_blocks=1 00:07:35.712 --rc geninfo_unexecuted_blocks=1 00:07:35.712 00:07:35.712 ' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.712 --rc genhtml_branch_coverage=1 00:07:35.712 --rc genhtml_function_coverage=1 00:07:35.712 --rc genhtml_legend=1 00:07:35.712 --rc geninfo_all_blocks=1 00:07:35.712 --rc geninfo_unexecuted_blocks=1 00:07:35.712 00:07:35.712 ' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.712 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:35.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.713 16:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:43.850 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:43.851 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:43.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:43.851 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.851 16:02:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:43.851 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:43.851 16:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:43.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:07:43.851 00:07:43.851 --- 10.0.0.2 ping statistics --- 00:07:43.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.851 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:07:43.851 00:07:43.851 --- 10.0.0.1 ping statistics --- 00:07:43.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.851 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:43.851 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1085850 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1085850 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:43.852 16:02:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1085850 ']' 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.852 16:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.852 [2024-11-20 16:02:19.206146] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:07:43.852 [2024-11-20 16:02:19.206237] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.852 [2024-11-20 16:02:19.308308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.852 [2024-11-20 16:02:19.362235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.852 [2024-11-20 16:02:19.362286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.852 [2024-11-20 16:02:19.362295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.852 [2024-11-20 16:02:19.362303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.852 [2024-11-20 16:02:19.362309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
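The networking that this nvmf_tgt instance is starting on was assembled a few steps earlier by nvmf_tcp_init: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced above; this is a sketch of the wiring, not the full nvmf/common.sh logic, and error handling plus teardown are omitted:

  # Target-side port lives in its own netns; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open TCP/4420 for NVMe over TCP; the comment tags the rule for later cleanup.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2   # reachability check before any NVMe traffic

The two ping checks in the trace (0% loss in each direction) are what allow nvmftestinit to return 0 and the host-management test to proceed.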
00:07:43.852 [2024-11-20 16:02:19.364261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.852 [2024-11-20 16:02:19.364437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.852 [2024-11-20 16:02:19.364596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:43.852 [2024-11-20 16:02:19.364598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.113 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.113 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:44.113 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.113 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.113 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 [2024-11-20 16:02:20.081026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 Malloc0 00:07:44.376 [2024-11-20 16:02:20.168105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1086050 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1086050 /var/tmp/bdevperf.sock 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1086050 ']' 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.376 { 00:07:44.376 "params": { 00:07:44.376 "name": "Nvme$subsystem", 00:07:44.376 "trtype": "$TEST_TRANSPORT", 00:07:44.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.376 "adrfam": "ipv4", 00:07:44.376 "trsvcid": "$NVMF_PORT", 00:07:44.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.376 "hdgst": ${hdgst:-false}, 00:07:44.376 "ddgst": ${ddgst:-false} 00:07:44.376 }, 00:07:44.376 "method": "bdev_nvme_attach_controller" 00:07:44.376 } 00:07:44.376 EOF 00:07:44.376 )") 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:44.376 16:02:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.376 "params": { 00:07:44.376 "name": "Nvme0", 00:07:44.376 "trtype": "tcp", 00:07:44.376 "traddr": "10.0.0.2", 00:07:44.376 "adrfam": "ipv4", 00:07:44.376 "trsvcid": "4420", 00:07:44.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:44.376 "hdgst": false, 00:07:44.376 "ddgst": false 00:07:44.376 }, 00:07:44.376 "method": "bdev_nvme_attach_controller" 00:07:44.376 }' 00:07:44.376 [2024-11-20 16:02:20.276532] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
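Worth unpacking before the run starts: the --json /dev/fd/63 argument in the bdevperf command traced above is bash process substitution. gen_nvmf_target_json expands the heredoc template once per subsystem, jq validates the assembled config, and bdevperf reads the result from the substituted file descriptor, so no temporary config file is needed. A condensed sketch of the equivalent invocation (the absolute workspace path is shortened here; the rendered JSON is exactly the printf output shown in the trace):

  # Hedged sketch: feed bdevperf its NVMe-oF attach config via process
  # substitution; <(...) is what shows up as /dev/fd/63 in the xtrace output.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10

With that config, bdevperf attaches Nvme0 over NVMe/TCP to 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode0) and drives a queue-depth-64 verify workload for ten seconds.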
00:07:44.376 [2024-11-20 16:02:20.276602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086050 ] 00:07:44.639 [2024-11-20 16:02:20.371420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.639 [2024-11-20 16:02:20.424911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.899 Running I/O for 10 seconds... 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=613 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 613 -ge 100 ']' 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:45.471 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:45.471 16:02:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.471 [2024-11-20 16:02:21.163757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17150 is same with the state(6) to be set 00:07:45.471 [the same tcp.c:1773 recv-state error for tqpair=0xd17150 repeats roughly 40 more times between 16:02:21.163849 and 16:02:21.164118 while the host is being removed; duplicate lines condensed] 00:07:45.471 [2024-11-20 16:02:21.166515] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:45.472 [2024-11-20 16:02:21.166580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.472 [the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1, cid:2 and cid:3; duplicate lines condensed] 00:07:45.472 [2024-11-20 16:02:21.166646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5c000 is same with the state(6) to be set 00:07:45.472 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.472 [2024-11-20 16:02:21.169719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.472 [2024-11-20 16:02:21.169757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.472 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.472 [the same command / ABORTED - SQ DELETION pair repeats for every other in-flight I/O on the deleted submission queue: READ cid:24-63 lba:84992-89984 and WRITE cid:0-22 lba:90112-92928, all len:128; duplicate lines condensed] 00:07:45.473 [2024-11-20 16:02:21.172280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:45.473 task offset: 84864 on job bdev=Nvme0n1 fails
00:07:45.473 00:07:45.473 Latency(us) 00:07:45.473 [2024-11-20T15:02:21.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.473 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:45.473 Job: Nvme0n1 ended in about 0.45 seconds with error 00:07:45.473 Verification LBA range: start 0x0 length 0x400 00:07:45.473 Nvme0n1 : 0.45 1462.48 91.41 141.18 0.00 38738.88 1720.32 34297.17 00:07:45.473 [2024-11-20T15:02:21.409Z] =================================================================================================================== 00:07:45.473 [2024-11-20T15:02:21.409Z] Total : 1462.48 91.41 141.18 0.00 38738.88 1720.32 34297.17 00:07:45.473 [2024-11-20 16:02:21.174497] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.473 [2024-11-20 16:02:21.174536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5c000 (9): Bad file descriptor 00:07:45.473 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.473 16:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:45.474 [2024-11-20 16:02:21.185922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1086050 00:07:46.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1086050) - No such process 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:46.416 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:46.417 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:46.417 { 00:07:46.417 "params": { 00:07:46.417 "name": "Nvme$subsystem", 00:07:46.417 "trtype": "$TEST_TRANSPORT", 00:07:46.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.417 "adrfam": "ipv4", 00:07:46.417 "trsvcid": "$NVMF_PORT", 00:07:46.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.417 "hdgst": ${hdgst:-false}, 00:07:46.417 "ddgst": ${ddgst:-false} 00:07:46.417 }, 00:07:46.417 "method": "bdev_nvme_attach_controller" 00:07:46.417 } 00:07:46.417 EOF 00:07:46.417 )") 00:07:46.417 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:46.417 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
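
For reference while reading the trace: gen_nvmf_target_json above assembles one bdev_nvme_attach_controller config entry per subsystem, which bdevperf consumes through --json /dev/fd/62; the finished JSON is printed just below. A rough hand-rolled equivalent against an already-running SPDK app looks like the following sketch. The rpc.py flags are standard SPDK, but the address, port, and NQNs are simply the values this run used:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # -b sets the controller name (the bdev surfaces as Nvme0n1); -t/-f/-a/-s give
  # transport, address family, target address and service id; -n/-q are the
  # subsystem and host NQNs that the generator filled in.
  $rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
      -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
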
00:07:46.417 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:46.417 16:02:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:46.417 "params": { 00:07:46.417 "name": "Nvme0", 00:07:46.417 "trtype": "tcp", 00:07:46.417 "traddr": "10.0.0.2", 00:07:46.417 "adrfam": "ipv4", 00:07:46.417 "trsvcid": "4420", 00:07:46.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:46.417 "hdgst": false, 00:07:46.417 "ddgst": false 00:07:46.417 }, 00:07:46.417 "method": "bdev_nvme_attach_controller" 00:07:46.417 }' 00:07:46.417 [2024-11-20 16:02:22.242283] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:07:46.417 [2024-11-20 16:02:22.242341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086410 ] 00:07:46.417 [2024-11-20 16:02:22.328871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.677 [2024-11-20 16:02:22.364682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.677 Running I/O for 1 seconds... 00:07:47.619 1597.00 IOPS, 99.81 MiB/s 00:07:47.619 Latency(us) 00:07:47.619 [2024-11-20T15:02:23.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.619 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:47.619 Verification LBA range: start 0x0 length 0x400 00:07:47.619 Nvme0n1 : 1.02 1629.23 101.83 0.00 0.00 38608.76 6963.20 34734.08 00:07:47.619 [2024-11-20T15:02:23.555Z] =================================================================================================================== 00:07:47.619 [2024-11-20T15:02:23.555Z] Total : 1629.23 101.83 0.00 0.00 38608.76 6963.20 34734.08 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.880 rmmod nvme_tcp 00:07:47.880 rmmod nvme_fabrics 00:07:47.880 rmmod nvme_keyring 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1085850 ']' 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1085850 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1085850 ']' 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1085850 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1085850 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1085850' 00:07:47.880 killing process with pid 1085850 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1085850 00:07:47.880 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1085850 00:07:48.141 [2024-11-20 16:02:23.878828] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.141 16:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.056 16:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.056 16:02:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:50.056 00:07:50.056 real 0m14.617s 00:07:50.056 user 0m22.766s 00:07:50.056 sys 0m6.810s 00:07:50.056 16:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.056 16:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.056 ************************************ 00:07:50.056 END TEST nvmf_host_management 00:07:50.056 ************************************ 00:07:50.317 16:02:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:50.317 16:02:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.317 16:02:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.318 16:02:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.318 ************************************ 00:07:50.318 START TEST nvmf_lvol 00:07:50.318 ************************************ 00:07:50.318 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:50.318 * Looking for test storage... 00:07:50.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.318 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.318 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.318 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.579 --rc genhtml_branch_coverage=1 00:07:50.579 --rc genhtml_function_coverage=1 00:07:50.579 --rc genhtml_legend=1 00:07:50.579 --rc geninfo_all_blocks=1 00:07:50.579 --rc geninfo_unexecuted_blocks=1 00:07:50.579 00:07:50.579 ' 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.579 --rc genhtml_branch_coverage=1 00:07:50.579 --rc genhtml_function_coverage=1 00:07:50.579 --rc genhtml_legend=1 00:07:50.579 --rc geninfo_all_blocks=1 00:07:50.579 --rc geninfo_unexecuted_blocks=1 00:07:50.579 00:07:50.579 ' 00:07:50.579 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.580 --rc genhtml_branch_coverage=1 00:07:50.580 --rc genhtml_function_coverage=1 00:07:50.580 --rc genhtml_legend=1 00:07:50.580 --rc geninfo_all_blocks=1 00:07:50.580 --rc geninfo_unexecuted_blocks=1 00:07:50.580 00:07:50.580 ' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.580 --rc genhtml_branch_coverage=1 00:07:50.580 --rc genhtml_function_coverage=1 00:07:50.580 --rc genhtml_legend=1 00:07:50.580 --rc geninfo_all_blocks=1 00:07:50.580 --rc geninfo_unexecuted_blocks=1 00:07:50.580 00:07:50.580 ' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
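
The lt/cmp_versions trace above is the harness deciding whether the installed lcov predates version 2, which picks the LCOV_OPTS it exports next. Reduced to a standalone helper, the comparison works like the sketch below; this is a hypothetical rewrite, not the scripts/common.sh original, and it assumes purely numeric version fields:

  # Split both versions on . - : and compare field by field; missing fields
  # are treated as 0 (so 1.15 vs 2 compares 1 against 2 at the first field).
  version_lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the lt 1.15 2 call
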
00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.580 16:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:58.722 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:58.722 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:58.722 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.723 16:02:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:58.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:58.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:07:58.723 00:07:58.723 --- 10.0.0.2 ping statistics --- 00:07:58.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.723 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:07:58.723 00:07:58.723 --- 10.0.0.1 ping statistics --- 00:07:58.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.723 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1091084 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1091084 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1091084 ']' 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.723 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.724 16:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.724 [2024-11-20 16:02:33.862702] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
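
With both ports addressed and the two pings succeeding, the tcp fixture is in place. Collapsed into one spot, the nvmf_tcp_init steps traced above amount to the following; this is only a replay of commands already shown, using the interface names and addresses this run discovered, not additional setup (the target app's DPDK EAL parameter dump continues below):

  # One e810 port becomes the target side inside a network namespace; the
  # other stays in the host namespace as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The rule is tagged so teardown can strip exactly the test's rules later
  # with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
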
00:07:58.724 [2024-11-20 16:02:33.862773] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.724 [2024-11-20 16:02:33.937650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.724 [2024-11-20 16:02:33.984609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.724 [2024-11-20 16:02:33.984661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.724 [2024-11-20 16:02:33.984668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.724 [2024-11-20 16:02:33.984674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.724 [2024-11-20 16:02:33.984679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.724 [2024-11-20 16:02:33.989190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.724 [2024-11-20 16:02:33.989268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.724 [2024-11-20 16:02:33.989283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.724 [2024-11-20 16:02:34.310342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:58.724 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:58.985 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:58.985 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:59.246 16:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:59.507 16:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a1882df3-62a9-41ab-a562-2c9f0f690d95 00:07:59.507 16:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a1882df3-62a9-41ab-a562-2c9f0f690d95 lvol 20 00:07:59.507 16:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ca8a0010-5b0a-4367-9a29-aa898cf06a45 00:07:59.507 16:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:59.769 16:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ca8a0010-5b0a-4367-9a29-aa898cf06a45 00:08:00.029 16:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:00.289 [2024-11-20 16:02:35.967416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.289 16:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.289 16:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1091485 00:08:00.289 16:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:00.289 16:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:01.675 16:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ca8a0010-5b0a-4367-9a29-aa898cf06a45 MY_SNAPSHOT 00:08:01.675 16:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=19dea3fb-bf5e-4c7b-8675-e4a16f1b10d6 00:08:01.675 16:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ca8a0010-5b0a-4367-9a29-aa898cf06a45 30 00:08:01.675 16:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 19dea3fb-bf5e-4c7b-8675-e4a16f1b10d6 MY_CLONE 00:08:01.935 16:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bf3aa496-9309-4ce0-93fd-8a46840ceafe 00:08:01.935 16:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bf3aa496-9309-4ce0-93fd-8a46840ceafe 00:08:02.196 16:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1091485 00:08:12.220 Initializing NVMe Controllers 00:08:12.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:12.220 Controller IO queue size 128, less than required. 00:08:12.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
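
The lvol operations this test exercises are scattered through the trace above; condensed into one sketch they read as follows (the bdevperf results banner resumes right after it). The UUIDs are whatever each command printed in this run; a reader reproducing this would capture their own from stdout, as the harness does:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512                       # -> Malloc0
  $rpc_py bdev_malloc_create 64 512                       # -> Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)       # a1882df3-... in this run
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB; ca8a0010-... here
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ... spdk_nvme_perf runs randwrite against the listener; while it is active:
  snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # freeze current contents
  $rpc_py bdev_lvol_resize "$lvol" 30                     # grow the live lvol to 30 MiB
  clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)       # thin clone of the snapshot
  $rpc_py bdev_lvol_inflate "$clone"                      # fully allocate the clone, detaching it
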
00:08:12.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:12.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:12.220 Initialization complete. Launching workers. 00:08:12.220 ======================================================== 00:08:12.220 Latency(us) 00:08:12.220 Device Information : IOPS MiB/s Average min max 00:08:12.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17357.79 67.80 7375.02 942.82 45165.57 00:08:12.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15744.40 61.50 8131.13 3224.73 43206.51 00:08:12.220 ======================================================== 00:08:12.220 Total : 33102.19 129.31 7734.65 942.82 45165.57 00:08:12.220 00:08:12.220 16:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.220 16:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ca8a0010-5b0a-4367-9a29-aa898cf06a45 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1882df3-62a9-41ab-a562-2c9f0f690d95 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.220 rmmod nvme_tcp 00:08:12.220 rmmod nvme_fabrics 00:08:12.220 rmmod nvme_keyring 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1091084 ']' 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1091084 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1091084 ']' 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1091084 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091084 00:08:12.220 16:02:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091084' 00:08:12.220 killing process with pid 1091084 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1091084 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1091084 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.220 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.221 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.221 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.221 16:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.139 00:08:14.139 real 0m23.542s 00:08:14.139 user 1m3.522s 00:08:14.139 sys 0m8.853s 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:14.139 ************************************ 00:08:14.139 END TEST nvmf_lvol 00:08:14.139 ************************************ 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.139 ************************************ 00:08:14.139 START TEST nvmf_lvs_grow 00:08:14.139 ************************************ 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:14.139 * Looking for test storage... 
00:08:14.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.139 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.140 --rc genhtml_branch_coverage=1 00:08:14.140 --rc genhtml_function_coverage=1 00:08:14.140 --rc genhtml_legend=1 00:08:14.140 --rc geninfo_all_blocks=1 00:08:14.140 --rc geninfo_unexecuted_blocks=1 00:08:14.140 00:08:14.140 ' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.140 --rc genhtml_branch_coverage=1 00:08:14.140 --rc genhtml_function_coverage=1 00:08:14.140 --rc genhtml_legend=1 00:08:14.140 --rc geninfo_all_blocks=1 00:08:14.140 --rc geninfo_unexecuted_blocks=1 00:08:14.140 00:08:14.140 ' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.140 --rc genhtml_branch_coverage=1 00:08:14.140 --rc genhtml_function_coverage=1 00:08:14.140 --rc genhtml_legend=1 00:08:14.140 --rc geninfo_all_blocks=1 00:08:14.140 --rc geninfo_unexecuted_blocks=1 00:08:14.140 00:08:14.140 ' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.140 --rc genhtml_branch_coverage=1 00:08:14.140 --rc genhtml_function_coverage=1 00:08:14.140 --rc genhtml_legend=1 00:08:14.140 --rc geninfo_all_blocks=1 00:08:14.140 --rc geninfo_unexecuted_blocks=1 00:08:14.140 00:08:14.140 ' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:14.140 16:02:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.140 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.141 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.141 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.141 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.141 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.141 16:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:22.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:22.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.383 16:02:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.383 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:22.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:22.384 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:08:22.384 00:08:22.384 --- 10.0.0.2 ping statistics --- 00:08:22.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.384 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:08:22.384 00:08:22.384 --- 10.0.0.1 ping statistics --- 00:08:22.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.384 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1098123 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1098123 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1098123 ']' 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [2024-11-20 16:02:57.544230] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
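Everything from this point is driven against a target started inside the cvl_0_0_ns_spdk namespace. Stripped of the harness, the bring-up being logged here is roughly the following; the until-loop is a simplified stand-in for the harness's waitforlisten, polling the JSON-RPC socket (/var/tmp/spdk.sock, rpc.py's default) with the real rpc_get_methods method:

# Start the NVMe-oF target pinned to core 0 (-m 0x1) inside the namespace
# that owns cvl_0_0, with all tracepoint groups enabled (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Block until the target's JSON-RPC socket answers before issuing setup RPCs.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done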
00:08:22.384 [2024-11-20 16:02:57.544295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.384 [2024-11-20 16:02:57.619216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.384 [2024-11-20 16:02:57.666546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.384 [2024-11-20 16:02:57.666598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.384 [2024-11-20 16:02:57.666606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.384 [2024-11-20 16:02:57.666612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.384 [2024-11-20 16:02:57.666616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.384 [2024-11-20 16:02:57.667356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.384 16:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:22.384 [2024-11-20 16:02:57.988230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 ************************************ 00:08:22.384 START TEST lvs_grow_clean 00:08:22.384 ************************************ 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:22.384 16:02:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:22.384 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.385 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.385 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.385 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:22.385 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:22.646 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:22.646 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:22.646 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:22.908 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:22.908 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:22.908 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e0bec887-12e0-4062-88b3-d237fee7dfd0 lvol 150 00:08:23.169 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=03040703-d884-4f09-9ca3-72197179aec3 00:08:23.169 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.169 16:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:23.169 [2024-11-20 16:02:59.004983] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:23.169 [2024-11-20 16:02:59.005059] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:23.169 true 00:08:23.169 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:23.169 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:23.430 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:23.430 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:23.691 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03040703-d884-4f09-9ca3-72197179aec3 00:08:23.691 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:23.951 [2024-11-20 16:02:59.735404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.951 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1098542 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1098542 /var/tmp/bdevperf.sock 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1098542 ']' 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.212 16:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:24.212 [2024-11-20 16:03:00.001971] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
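The setup just logged, plus the initiator half that follows, condenses to the sequence below. Paths, UUIDs, and addresses are the ones this job printed; the lvstore UUID (e0bec887-...) and lvol UUID (03040703-...) are returned by the create calls at runtime, not chosen in advance:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
# Target side: 200M file-backed AIO bdev -> lvstore -> 150M lvol, exported over NVMe/TCP.
truncate -s 200M $spdk/test/nvmf/target/aio_bdev
$rpc bdev_aio_create $spdk/test/nvmf/target/aio_bdev aio_bdev 4096
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
$rpc bdev_lvol_create -u e0bec887-12e0-4062-88b3-d237fee7dfd0 lvol 150
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03040703-d884-4f09-9ca3-72197179aec3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf starts suspended (-z) on its own RPC socket,
# attaches the exported namespace as bdev Nvme0n1, then runs the workload.
$spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Starting bdevperf with -z is what makes the attach possible: it initializes and waits, so the Nvme0n1 bdev exists before perform_tests releases the randwrite workload whose per-second IOPS appear in the log.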
00:08:24.212 [2024-11-20 16:03:00.002042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098542 ] 00:08:24.212 [2024-11-20 16:03:00.098332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.472 [2024-11-20 16:03:00.161742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.044 16:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.044 16:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:25.044 16:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:25.304 Nvme0n1 00:08:25.304 16:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:25.566 [ 00:08:25.566 { 00:08:25.566 "name": "Nvme0n1", 00:08:25.566 "aliases": [ 00:08:25.566 "03040703-d884-4f09-9ca3-72197179aec3" 00:08:25.566 ], 00:08:25.566 "product_name": "NVMe disk", 00:08:25.566 "block_size": 4096, 00:08:25.566 "num_blocks": 38912, 00:08:25.566 "uuid": "03040703-d884-4f09-9ca3-72197179aec3", 00:08:25.566 "numa_id": 0, 00:08:25.566 "assigned_rate_limits": { 00:08:25.566 "rw_ios_per_sec": 0, 00:08:25.566 "rw_mbytes_per_sec": 0, 00:08:25.566 "r_mbytes_per_sec": 0, 00:08:25.566 "w_mbytes_per_sec": 0 00:08:25.566 }, 00:08:25.566 "claimed": false, 00:08:25.566 "zoned": false, 00:08:25.566 "supported_io_types": { 00:08:25.566 "read": true, 00:08:25.566 "write": true, 00:08:25.566 "unmap": true, 00:08:25.566 "flush": true, 00:08:25.566 "reset": true, 00:08:25.566 "nvme_admin": true, 00:08:25.566 "nvme_io": true, 00:08:25.566 "nvme_io_md": false, 00:08:25.566 "write_zeroes": true, 00:08:25.566 "zcopy": false, 00:08:25.566 "get_zone_info": false, 00:08:25.566 "zone_management": false, 00:08:25.566 "zone_append": false, 00:08:25.566 "compare": true, 00:08:25.566 "compare_and_write": true, 00:08:25.566 "abort": true, 00:08:25.566 "seek_hole": false, 00:08:25.566 "seek_data": false, 00:08:25.566 "copy": true, 00:08:25.566 "nvme_iov_md": false 00:08:25.566 }, 00:08:25.566 "memory_domains": [ 00:08:25.566 { 00:08:25.566 "dma_device_id": "system", 00:08:25.566 "dma_device_type": 1 00:08:25.566 } 00:08:25.566 ], 00:08:25.566 "driver_specific": { 00:08:25.566 "nvme": [ 00:08:25.566 { 00:08:25.566 "trid": { 00:08:25.566 "trtype": "TCP", 00:08:25.566 "adrfam": "IPv4", 00:08:25.566 "traddr": "10.0.0.2", 00:08:25.566 "trsvcid": "4420", 00:08:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:25.566 }, 00:08:25.566 "ctrlr_data": { 00:08:25.566 "cntlid": 1, 00:08:25.566 "vendor_id": "0x8086", 00:08:25.566 "model_number": "SPDK bdev Controller", 00:08:25.566 "serial_number": "SPDK0", 00:08:25.566 "firmware_revision": "25.01", 00:08:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.566 "oacs": { 00:08:25.566 "security": 0, 00:08:25.566 "format": 0, 00:08:25.566 "firmware": 0, 00:08:25.566 "ns_manage": 0 00:08:25.566 }, 00:08:25.566 "multi_ctrlr": true, 00:08:25.566 
"ana_reporting": false 00:08:25.566 }, 00:08:25.566 "vs": { 00:08:25.566 "nvme_version": "1.3" 00:08:25.566 }, 00:08:25.566 "ns_data": { 00:08:25.566 "id": 1, 00:08:25.566 "can_share": true 00:08:25.566 } 00:08:25.566 } 00:08:25.566 ], 00:08:25.566 "mp_policy": "active_passive" 00:08:25.566 } 00:08:25.566 } 00:08:25.566 ] 00:08:25.566 16:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:25.566 16:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1098959 00:08:25.566 16:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:25.566 Running I/O for 10 seconds... 00:08:26.949 Latency(us) 00:08:26.949 [2024-11-20T15:03:02.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.949 Nvme0n1 : 1.00 24591.00 96.06 0.00 0.00 0.00 0.00 0.00 00:08:26.949 [2024-11-20T15:03:02.885Z] =================================================================================================================== 00:08:26.949 [2024-11-20T15:03:02.885Z] Total : 24591.00 96.06 0.00 0.00 0.00 0.00 0.00 00:08:26.949 00:08:27.517 16:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:27.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.777 Nvme0n1 : 2.00 24991.00 97.62 0.00 0.00 0.00 0.00 0.00 00:08:27.777 [2024-11-20T15:03:03.713Z] =================================================================================================================== 00:08:27.777 [2024-11-20T15:03:03.713Z] Total : 24991.00 97.62 0.00 0.00 0.00 0.00 0.00 00:08:27.777 00:08:27.777 true 00:08:27.777 16:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:27.777 16:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:27.777 16:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:27.777 16:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:27.777 16:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1098959 00:08:28.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.718 Nvme0n1 : 3.00 25130.33 98.17 0.00 0.00 0.00 0.00 0.00 00:08:28.718 [2024-11-20T15:03:04.654Z] =================================================================================================================== 00:08:28.718 [2024-11-20T15:03:04.654Z] Total : 25130.33 98.17 0.00 0.00 0.00 0.00 0.00 00:08:28.718 00:08:29.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.657 Nvme0n1 : 4.00 25224.25 98.53 0.00 0.00 0.00 0.00 0.00 00:08:29.657 [2024-11-20T15:03:05.593Z] 
=================================================================================================================== 00:08:29.657 [2024-11-20T15:03:05.593Z] Total : 25224.25 98.53 0.00 0.00 0.00 0.00 0.00 00:08:29.657 00:08:30.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.597 Nvme0n1 : 5.00 25283.20 98.76 0.00 0.00 0.00 0.00 0.00 00:08:30.597 [2024-11-20T15:03:06.533Z] =================================================================================================================== 00:08:30.597 [2024-11-20T15:03:06.533Z] Total : 25283.20 98.76 0.00 0.00 0.00 0.00 0.00 00:08:30.597 00:08:31.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.537 Nvme0n1 : 6.00 25335.67 98.97 0.00 0.00 0.00 0.00 0.00 00:08:31.537 [2024-11-20T15:03:07.473Z] =================================================================================================================== 00:08:31.537 [2024-11-20T15:03:07.473Z] Total : 25335.67 98.97 0.00 0.00 0.00 0.00 0.00 00:08:31.537 00:08:32.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.919 Nvme0n1 : 7.00 25366.71 99.09 0.00 0.00 0.00 0.00 0.00 00:08:32.919 [2024-11-20T15:03:08.855Z] =================================================================================================================== 00:08:32.919 [2024-11-20T15:03:08.855Z] Total : 25366.71 99.09 0.00 0.00 0.00 0.00 0.00 00:08:32.919 00:08:33.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.860 Nvme0n1 : 8.00 25392.75 99.19 0.00 0.00 0.00 0.00 0.00 00:08:33.860 [2024-11-20T15:03:09.796Z] =================================================================================================================== 00:08:33.860 [2024-11-20T15:03:09.796Z] Total : 25392.75 99.19 0.00 0.00 0.00 0.00 0.00 00:08:33.860 00:08:34.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.798 Nvme0n1 : 9.00 25401.56 99.22 0.00 0.00 0.00 0.00 0.00 00:08:34.798 [2024-11-20T15:03:10.734Z] =================================================================================================================== 00:08:34.798 [2024-11-20T15:03:10.734Z] Total : 25401.56 99.22 0.00 0.00 0.00 0.00 0.00 00:08:34.798 00:08:35.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.736 Nvme0n1 : 10.00 25421.00 99.30 0.00 0.00 0.00 0.00 0.00 00:08:35.736 [2024-11-20T15:03:11.672Z] =================================================================================================================== 00:08:35.736 [2024-11-20T15:03:11.672Z] Total : 25421.00 99.30 0.00 0.00 0.00 0.00 0.00 00:08:35.736 00:08:35.736 00:08:35.736 Latency(us) 00:08:35.736 [2024-11-20T15:03:11.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.736 Nvme0n1 : 10.01 25420.88 99.30 0.00 0.00 5031.50 2484.91 13653.33 00:08:35.736 [2024-11-20T15:03:11.672Z] =================================================================================================================== 00:08:35.736 [2024-11-20T15:03:11.672Z] Total : 25420.88 99.30 0.00 0.00 5031.50 2484.91 13653.33 00:08:35.736 { 00:08:35.736 "results": [ 00:08:35.736 { 00:08:35.736 "job": "Nvme0n1", 00:08:35.736 "core_mask": "0x2", 00:08:35.736 "workload": "randwrite", 00:08:35.736 "status": "finished", 00:08:35.736 "queue_depth": 128, 00:08:35.736 "io_size": 4096, 00:08:35.736 
"runtime": 10.005082, 00:08:35.736 "iops": 25420.8811082208, 00:08:35.736 "mibps": 99.3003168289875, 00:08:35.736 "io_failed": 0, 00:08:35.736 "io_timeout": 0, 00:08:35.736 "avg_latency_us": 5031.496351049915, 00:08:35.736 "min_latency_us": 2484.9066666666668, 00:08:35.736 "max_latency_us": 13653.333333333334 00:08:35.736 } 00:08:35.736 ], 00:08:35.736 "core_count": 1 00:08:35.736 } 00:08:35.736 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1098542 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1098542 ']' 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1098542 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1098542 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1098542' 00:08:35.737 killing process with pid 1098542 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1098542 00:08:35.737 Received shutdown signal, test time was about 10.000000 seconds 00:08:35.737 00:08:35.737 Latency(us) 00:08:35.737 [2024-11-20T15:03:11.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.737 [2024-11-20T15:03:11.673Z] =================================================================================================================== 00:08:35.737 [2024-11-20T15:03:11.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1098542 00:08:35.737 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.997 16:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.256 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:36.256 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:36.516 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:36.516 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:36.516 16:03:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.516 [2024-11-20 16:03:12.398447] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.776 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.777 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:36.777 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:36.777 request: 00:08:36.777 { 00:08:36.777 "uuid": "e0bec887-12e0-4062-88b3-d237fee7dfd0", 00:08:36.777 "method": "bdev_lvol_get_lvstores", 00:08:36.777 "req_id": 1 00:08:36.777 } 00:08:36.777 Got JSON-RPC error response 00:08:36.777 response: 00:08:36.777 { 00:08:36.777 "code": -19, 00:08:36.777 "message": "No such device" 00:08:36.777 } 00:08:36.777 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:36.777 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.777 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:36.777 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.777 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.036 aio_bdev 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 03040703-d884-4f09-9ca3-72197179aec3 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=03040703-d884-4f09-9ca3-72197179aec3 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:37.036 16:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 03040703-d884-4f09-9ca3-72197179aec3 -t 2000 00:08:37.295 [ 00:08:37.295 { 00:08:37.295 "name": "03040703-d884-4f09-9ca3-72197179aec3", 00:08:37.295 "aliases": [ 00:08:37.295 "lvs/lvol" 00:08:37.295 ], 00:08:37.295 "product_name": "Logical Volume", 00:08:37.295 "block_size": 4096, 00:08:37.295 "num_blocks": 38912, 00:08:37.295 "uuid": "03040703-d884-4f09-9ca3-72197179aec3", 00:08:37.295 "assigned_rate_limits": { 00:08:37.295 "rw_ios_per_sec": 0, 00:08:37.295 "rw_mbytes_per_sec": 0, 00:08:37.295 "r_mbytes_per_sec": 0, 00:08:37.295 "w_mbytes_per_sec": 0 00:08:37.295 }, 00:08:37.295 "claimed": false, 00:08:37.295 "zoned": false, 00:08:37.296 "supported_io_types": { 00:08:37.296 "read": true, 00:08:37.296 "write": true, 00:08:37.296 "unmap": true, 00:08:37.296 "flush": false, 00:08:37.296 "reset": true, 00:08:37.296 "nvme_admin": false, 00:08:37.296 "nvme_io": false, 00:08:37.296 "nvme_io_md": false, 00:08:37.296 "write_zeroes": true, 00:08:37.296 "zcopy": false, 00:08:37.296 "get_zone_info": false, 00:08:37.296 "zone_management": false, 00:08:37.296 "zone_append": false, 00:08:37.296 "compare": false, 00:08:37.296 "compare_and_write": false, 00:08:37.296 "abort": false, 00:08:37.296 "seek_hole": true, 00:08:37.296 "seek_data": true, 00:08:37.296 "copy": false, 00:08:37.296 "nvme_iov_md": false 00:08:37.296 }, 00:08:37.296 "driver_specific": { 00:08:37.296 "lvol": { 00:08:37.296 "lvol_store_uuid": "e0bec887-12e0-4062-88b3-d237fee7dfd0", 00:08:37.296 "base_bdev": "aio_bdev", 00:08:37.296 "thin_provision": false, 00:08:37.296 "num_allocated_clusters": 38, 00:08:37.296 "snapshot": false, 00:08:37.296 "clone": false, 00:08:37.296 "esnap_clone": false 00:08:37.296 } 00:08:37.296 } 00:08:37.296 } 00:08:37.296 ] 00:08:37.296 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:37.296 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:37.296 
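The exchange above is the heart of the clean-grow teardown check: deleting the backing AIO bdev hot-removes the lvstore, so bdev_lvol_get_lvstores must fail with -19 until the AIO bdev is re-created and re-examined. A minimal sketch of that sequence, assuming the rpc.py path, backing file, and lvstore UUID from this particular run (illustrative values, not fixed ones):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  LVS_UUID=e0bec887-12e0-4062-88b3-d237fee7dfd0   # value from this run

  # Removing the AIO bdev closes the lvstore stacked on top of it ...
  $RPC bdev_aio_delete aio_bdev
  # ... so querying the lvstore must now fail with -19 (No such device).
  $RPC bdev_lvol_get_lvstores -u "$LVS_UUID" && echo "BUG: lvstore still present"

  # Re-creating the AIO bdev lets the examine path reload the lvstore metadata.
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
  $RPC bdev_wait_for_examine
  # free_clusters should be back to its pre-remove value (61 in this run).
  $RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters'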
16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:37.555 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:37.555 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:37.555 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:37.556 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:37.556 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 03040703-d884-4f09-9ca3-72197179aec3 00:08:37.816 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0bec887-12e0-4062-88b3-d237fee7dfd0 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.076 00:08:38.076 real 0m15.887s 00:08:38.076 user 0m15.610s 00:08:38.076 sys 0m1.369s 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:38.076 ************************************ 00:08:38.076 END TEST lvs_grow_clean 00:08:38.076 ************************************ 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.076 16:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.337 ************************************ 00:08:38.337 START TEST lvs_grow_dirty 00:08:38.337 ************************************ 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.337 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:38.338 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:38.338 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:38.597 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:38.597 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:38.597 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:38.857 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:38.857 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:38.857 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 66133df9-69bc-43cc-95a2-4978bda0fbad lvol 150 00:08:38.857 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bf065677-cd14-4ff0-8c69-2a97c387275b 00:08:38.857 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.857 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:39.117 [2024-11-20 16:03:14.938508] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:39.117 [2024-11-20 16:03:14.938553] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:39.117 true 00:08:39.117 16:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:39.117 16:03:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:39.377 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:39.377 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:39.377 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf065677-cd14-4ff0-8c69-2a97c387275b 00:08:39.637 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.897 [2024-11-20 16:03:15.608468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1102239 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1102239 /var/tmp/bdevperf.sock 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1102239 ']' 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.897 16:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.157 [2024-11-20 16:03:15.840875] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
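Condensed, the dirty-grow setup above does the following: back an lvstore with a 200 MiB AIO file, carve a 150 MiB lvol out of it, grow the file to 400 MiB, and rescan so the extra blocks become visible to the bdev layer; total_data_clusters stays at 49 until bdev_lvol_grow_lvstore runs later, while I/O is in flight. A sketch under those assumptions, reusing the sizes, names, and listen address from this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$AIO_FILE"                    # initial backing file
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096  # 4 KiB logical blocks
  LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # 4 MiB clusters -> 49 usable
  LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 150)      # 150 MiB volume

  truncate -s 400M "$AIO_FILE"    # grow the file underneath the bdev
  $RPC bdev_aio_rescan aio_bdev   # block count 51200 -> 102400; lvstore not grown yet
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'  # still 49

  # Export the lvol over NVMe/TCP so bdevperf can drive random writes at it.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420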
00:08:40.157 [2024-11-20 16:03:15.840927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102239 ] 00:08:40.157 [2024-11-20 16:03:15.924711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.157 [2024-11-20 16:03:15.954558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.729 16:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.729 16:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:40.729 16:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:41.300 Nvme0n1 00:08:41.300 16:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:41.560 [ 00:08:41.560 { 00:08:41.560 "name": "Nvme0n1", 00:08:41.560 "aliases": [ 00:08:41.560 "bf065677-cd14-4ff0-8c69-2a97c387275b" 00:08:41.560 ], 00:08:41.560 "product_name": "NVMe disk", 00:08:41.560 "block_size": 4096, 00:08:41.560 "num_blocks": 38912, 00:08:41.560 "uuid": "bf065677-cd14-4ff0-8c69-2a97c387275b", 00:08:41.560 "numa_id": 0, 00:08:41.560 "assigned_rate_limits": { 00:08:41.560 "rw_ios_per_sec": 0, 00:08:41.560 "rw_mbytes_per_sec": 0, 00:08:41.560 "r_mbytes_per_sec": 0, 00:08:41.560 "w_mbytes_per_sec": 0 00:08:41.560 }, 00:08:41.560 "claimed": false, 00:08:41.560 "zoned": false, 00:08:41.561 "supported_io_types": { 00:08:41.561 "read": true, 00:08:41.561 "write": true, 00:08:41.561 "unmap": true, 00:08:41.561 "flush": true, 00:08:41.561 "reset": true, 00:08:41.561 "nvme_admin": true, 00:08:41.561 "nvme_io": true, 00:08:41.561 "nvme_io_md": false, 00:08:41.561 "write_zeroes": true, 00:08:41.561 "zcopy": false, 00:08:41.561 "get_zone_info": false, 00:08:41.561 "zone_management": false, 00:08:41.561 "zone_append": false, 00:08:41.561 "compare": true, 00:08:41.561 "compare_and_write": true, 00:08:41.561 "abort": true, 00:08:41.561 "seek_hole": false, 00:08:41.561 "seek_data": false, 00:08:41.561 "copy": true, 00:08:41.561 "nvme_iov_md": false 00:08:41.561 }, 00:08:41.561 "memory_domains": [ 00:08:41.561 { 00:08:41.561 "dma_device_id": "system", 00:08:41.561 "dma_device_type": 1 00:08:41.561 } 00:08:41.561 ], 00:08:41.561 "driver_specific": { 00:08:41.561 "nvme": [ 00:08:41.561 { 00:08:41.561 "trid": { 00:08:41.561 "trtype": "TCP", 00:08:41.561 "adrfam": "IPv4", 00:08:41.561 "traddr": "10.0.0.2", 00:08:41.561 "trsvcid": "4420", 00:08:41.561 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:41.561 }, 00:08:41.561 "ctrlr_data": { 00:08:41.561 "cntlid": 1, 00:08:41.561 "vendor_id": "0x8086", 00:08:41.561 "model_number": "SPDK bdev Controller", 00:08:41.561 "serial_number": "SPDK0", 00:08:41.561 "firmware_revision": "25.01", 00:08:41.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.561 "oacs": { 00:08:41.561 "security": 0, 00:08:41.561 "format": 0, 00:08:41.561 "firmware": 0, 00:08:41.561 "ns_manage": 0 00:08:41.561 }, 00:08:41.561 "multi_ctrlr": true, 00:08:41.561 
"ana_reporting": false 00:08:41.561 }, 00:08:41.561 "vs": { 00:08:41.561 "nvme_version": "1.3" 00:08:41.561 }, 00:08:41.561 "ns_data": { 00:08:41.561 "id": 1, 00:08:41.561 "can_share": true 00:08:41.561 } 00:08:41.561 } 00:08:41.561 ], 00:08:41.561 "mp_policy": "active_passive" 00:08:41.561 } 00:08:41.561 } 00:08:41.561 ] 00:08:41.561 16:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1102549 00:08:41.561 16:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:41.561 16:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:41.561 Running I/O for 10 seconds... 00:08:42.504 Latency(us) 00:08:42.504 [2024-11-20T15:03:18.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.504 Nvme0n1 : 1.00 24978.00 97.57 0.00 0.00 0.00 0.00 0.00 00:08:42.504 [2024-11-20T15:03:18.440Z] =================================================================================================================== 00:08:42.504 [2024-11-20T15:03:18.440Z] Total : 24978.00 97.57 0.00 0.00 0.00 0.00 0.00 00:08:42.504 00:08:43.446 16:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:43.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.446 Nvme0n1 : 2.00 25138.50 98.20 0.00 0.00 0.00 0.00 0.00 00:08:43.446 [2024-11-20T15:03:19.382Z] =================================================================================================================== 00:08:43.446 [2024-11-20T15:03:19.382Z] Total : 25138.50 98.20 0.00 0.00 0.00 0.00 0.00 00:08:43.446 00:08:43.706 true 00:08:43.706 16:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:43.706 16:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:43.706 16:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:43.706 16:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:43.706 16:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1102549 00:08:44.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.648 Nvme0n1 : 3.00 25216.67 98.50 0.00 0.00 0.00 0.00 0.00 00:08:44.648 [2024-11-20T15:03:20.584Z] =================================================================================================================== 00:08:44.648 [2024-11-20T15:03:20.584Z] Total : 25216.67 98.50 0.00 0.00 0.00 0.00 0.00 00:08:44.648 00:08:45.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.592 Nvme0n1 : 4.00 25277.00 98.74 0.00 0.00 0.00 0.00 0.00 00:08:45.592 [2024-11-20T15:03:21.528Z] 
=================================================================================================================== 00:08:45.592 [2024-11-20T15:03:21.528Z] Total : 25277.00 98.74 0.00 0.00 0.00 0.00 0.00 00:08:45.592 00:08:46.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.533 Nvme0n1 : 5.00 25326.00 98.93 0.00 0.00 0.00 0.00 0.00 00:08:46.533 [2024-11-20T15:03:22.469Z] =================================================================================================================== 00:08:46.533 [2024-11-20T15:03:22.469Z] Total : 25326.00 98.93 0.00 0.00 0.00 0.00 0.00 00:08:46.533 00:08:47.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.481 Nvme0n1 : 6.00 25358.33 99.06 0.00 0.00 0.00 0.00 0.00 00:08:47.481 [2024-11-20T15:03:23.417Z] =================================================================================================================== 00:08:47.481 [2024-11-20T15:03:23.417Z] Total : 25358.33 99.06 0.00 0.00 0.00 0.00 0.00 00:08:47.481 00:08:48.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.864 Nvme0n1 : 7.00 25382.43 99.15 0.00 0.00 0.00 0.00 0.00 00:08:48.864 [2024-11-20T15:03:24.800Z] =================================================================================================================== 00:08:48.864 [2024-11-20T15:03:24.800Z] Total : 25382.43 99.15 0.00 0.00 0.00 0.00 0.00 00:08:48.864 00:08:49.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.435 Nvme0n1 : 8.00 25407.12 99.25 0.00 0.00 0.00 0.00 0.00 00:08:49.435 [2024-11-20T15:03:25.371Z] =================================================================================================================== 00:08:49.435 [2024-11-20T15:03:25.371Z] Total : 25407.12 99.25 0.00 0.00 0.00 0.00 0.00 00:08:49.435 00:08:50.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.819 Nvme0n1 : 9.00 25420.89 99.30 0.00 0.00 0.00 0.00 0.00 00:08:50.819 [2024-11-20T15:03:26.755Z] =================================================================================================================== 00:08:50.819 [2024-11-20T15:03:26.755Z] Total : 25420.89 99.30 0.00 0.00 0.00 0.00 0.00 00:08:50.819 00:08:51.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.761 Nvme0n1 : 10.00 25431.70 99.34 0.00 0.00 0.00 0.00 0.00 00:08:51.761 [2024-11-20T15:03:27.697Z] =================================================================================================================== 00:08:51.761 [2024-11-20T15:03:27.697Z] Total : 25431.70 99.34 0.00 0.00 0.00 0.00 0.00 00:08:51.761 00:08:51.761 00:08:51.761 Latency(us) 00:08:51.761 [2024-11-20T15:03:27.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.761 Nvme0n1 : 10.00 25433.16 99.35 0.00 0.00 5029.94 3058.35 11141.12 00:08:51.761 [2024-11-20T15:03:27.697Z] =================================================================================================================== 00:08:51.761 [2024-11-20T15:03:27.697Z] Total : 25433.16 99.35 0.00 0.00 5029.94 3058.35 11141.12 00:08:51.761 { 00:08:51.761 "results": [ 00:08:51.761 { 00:08:51.761 "job": "Nvme0n1", 00:08:51.761 "core_mask": "0x2", 00:08:51.761 "workload": "randwrite", 00:08:51.761 "status": "finished", 00:08:51.761 "queue_depth": 128, 00:08:51.761 "io_size": 4096, 00:08:51.761 
"runtime": 10.004457, 00:08:51.761 "iops": 25433.16443860971, 00:08:51.761 "mibps": 99.34829858831918, 00:08:51.761 "io_failed": 0, 00:08:51.761 "io_timeout": 0, 00:08:51.761 "avg_latency_us": 5029.93817141884, 00:08:51.761 "min_latency_us": 3058.346666666667, 00:08:51.761 "max_latency_us": 11141.12 00:08:51.761 } 00:08:51.761 ], 00:08:51.761 "core_count": 1 00:08:51.761 } 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1102239 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1102239 ']' 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1102239 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102239 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102239' 00:08:51.761 killing process with pid 1102239 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1102239 00:08:51.761 Received shutdown signal, test time was about 10.000000 seconds 00:08:51.761 00:08:51.761 Latency(us) 00:08:51.761 [2024-11-20T15:03:27.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.761 [2024-11-20T15:03:27.697Z] =================================================================================================================== 00:08:51.761 [2024-11-20T15:03:27.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1102239 00:08:51.761 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:52.021 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:52.021 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:52.021 16:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:52.283 16:03:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1098123 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1098123 00:08:52.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1098123 Killed "${NVMF_APP[@]}" "$@" 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1104810 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1104810 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1104810 ']' 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.283 16:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:52.283 [2024-11-20 16:03:28.215905] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:08:52.283 [2024-11-20 16:03:28.215963] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.544 [2024-11-20 16:03:28.310040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.544 [2024-11-20 16:03:28.341117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.544 [2024-11-20 16:03:28.341145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.544 [2024-11-20 16:03:28.341150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.544 [2024-11-20 16:03:28.341155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
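Because the previous target was killed with SIGKILL while the lvstore was still open, its blobstore superblock was never marked clean; when the freshly started target re-creates the AIO bdev just below, blobstore load detects this and runs recovery, replaying the metadata (blobs 0x0 and 0x1 in this run). A sketch of the dirty-restart step, with the PID, netns, and binary path of this run standing in for real values:

  # Kill the running target hard so the lvstore cannot close cleanly.
  kill -9 "$nvmfpid"        # 1098123 in this run
  wait "$nvmfpid" || true   # reaped with "Killed" status, as line 75 reports

  # Start a fresh target; it knows nothing about the lvstore yet.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # The next bdev_aio_create triggers "Performing recovery on blobstore".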
00:08:52.544 [2024-11-20 16:03:28.341164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.544 [2024-11-20 16:03:28.341660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.115 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.115 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:53.115 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.115 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.115 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:53.375 [2024-11-20 16:03:29.207702] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:53.375 [2024-11-20 16:03:29.207777] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:53.375 [2024-11-20 16:03:29.207799] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bf065677-cd14-4ff0-8c69-2a97c387275b 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bf065677-cd14-4ff0-8c69-2a97c387275b 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.375 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:53.636 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bf065677-cd14-4ff0-8c69-2a97c387275b -t 2000 00:08:53.636 [ 00:08:53.636 { 00:08:53.636 "name": "bf065677-cd14-4ff0-8c69-2a97c387275b", 00:08:53.636 "aliases": [ 00:08:53.636 "lvs/lvol" 00:08:53.636 ], 00:08:53.636 "product_name": "Logical Volume", 00:08:53.636 "block_size": 4096, 00:08:53.636 "num_blocks": 38912, 00:08:53.636 "uuid": "bf065677-cd14-4ff0-8c69-2a97c387275b", 00:08:53.636 "assigned_rate_limits": { 00:08:53.636 "rw_ios_per_sec": 0, 00:08:53.636 "rw_mbytes_per_sec": 0, 
00:08:53.636 "r_mbytes_per_sec": 0, 00:08:53.636 "w_mbytes_per_sec": 0 00:08:53.636 }, 00:08:53.636 "claimed": false, 00:08:53.636 "zoned": false, 00:08:53.636 "supported_io_types": { 00:08:53.636 "read": true, 00:08:53.636 "write": true, 00:08:53.636 "unmap": true, 00:08:53.636 "flush": false, 00:08:53.636 "reset": true, 00:08:53.636 "nvme_admin": false, 00:08:53.636 "nvme_io": false, 00:08:53.636 "nvme_io_md": false, 00:08:53.636 "write_zeroes": true, 00:08:53.636 "zcopy": false, 00:08:53.636 "get_zone_info": false, 00:08:53.636 "zone_management": false, 00:08:53.636 "zone_append": false, 00:08:53.636 "compare": false, 00:08:53.636 "compare_and_write": false, 00:08:53.636 "abort": false, 00:08:53.636 "seek_hole": true, 00:08:53.636 "seek_data": true, 00:08:53.636 "copy": false, 00:08:53.636 "nvme_iov_md": false 00:08:53.636 }, 00:08:53.636 "driver_specific": { 00:08:53.636 "lvol": { 00:08:53.636 "lvol_store_uuid": "66133df9-69bc-43cc-95a2-4978bda0fbad", 00:08:53.636 "base_bdev": "aio_bdev", 00:08:53.636 "thin_provision": false, 00:08:53.636 "num_allocated_clusters": 38, 00:08:53.636 "snapshot": false, 00:08:53.636 "clone": false, 00:08:53.636 "esnap_clone": false 00:08:53.636 } 00:08:53.636 } 00:08:53.636 } 00:08:53.636 ] 00:08:53.897 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:53.897 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:53.897 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:53.897 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:53.897 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:53.897 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:54.158 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:54.158 16:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:54.158 [2024-11-20 16:03:30.064408] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:54.419 request: 00:08:54.419 { 00:08:54.419 "uuid": "66133df9-69bc-43cc-95a2-4978bda0fbad", 00:08:54.419 "method": "bdev_lvol_get_lvstores", 00:08:54.419 "req_id": 1 00:08:54.419 } 00:08:54.419 Got JSON-RPC error response 00:08:54.419 response: 00:08:54.419 { 00:08:54.419 "code": -19, 00:08:54.419 "message": "No such device" 00:08:54.419 } 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:54.419 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.680 aio_bdev 00:08:54.680 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bf065677-cd14-4ff0-8c69-2a97c387275b 00:08:54.680 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bf065677-cd14-4ff0-8c69-2a97c387275b 00:08:54.680 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.680 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:54.680 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.680 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.680 16:03:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.941 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bf065677-cd14-4ff0-8c69-2a97c387275b -t 2000 00:08:54.941 [ 00:08:54.941 { 00:08:54.941 "name": "bf065677-cd14-4ff0-8c69-2a97c387275b", 00:08:54.941 "aliases": [ 00:08:54.941 "lvs/lvol" 00:08:54.941 ], 00:08:54.941 "product_name": "Logical Volume", 00:08:54.941 "block_size": 4096, 00:08:54.941 "num_blocks": 38912, 00:08:54.941 "uuid": "bf065677-cd14-4ff0-8c69-2a97c387275b", 00:08:54.941 "assigned_rate_limits": { 00:08:54.941 "rw_ios_per_sec": 0, 00:08:54.941 "rw_mbytes_per_sec": 0, 00:08:54.941 "r_mbytes_per_sec": 0, 00:08:54.941 "w_mbytes_per_sec": 0 00:08:54.941 }, 00:08:54.941 "claimed": false, 00:08:54.941 "zoned": false, 00:08:54.941 "supported_io_types": { 00:08:54.941 "read": true, 00:08:54.941 "write": true, 00:08:54.941 "unmap": true, 00:08:54.941 "flush": false, 00:08:54.941 "reset": true, 00:08:54.941 "nvme_admin": false, 00:08:54.941 "nvme_io": false, 00:08:54.941 "nvme_io_md": false, 00:08:54.941 "write_zeroes": true, 00:08:54.941 "zcopy": false, 00:08:54.941 "get_zone_info": false, 00:08:54.941 "zone_management": false, 00:08:54.941 "zone_append": false, 00:08:54.941 "compare": false, 00:08:54.941 "compare_and_write": false, 00:08:54.941 "abort": false, 00:08:54.941 "seek_hole": true, 00:08:54.941 "seek_data": true, 00:08:54.941 "copy": false, 00:08:54.941 "nvme_iov_md": false 00:08:54.941 }, 00:08:54.941 "driver_specific": { 00:08:54.941 "lvol": { 00:08:54.941 "lvol_store_uuid": "66133df9-69bc-43cc-95a2-4978bda0fbad", 00:08:54.941 "base_bdev": "aio_bdev", 00:08:54.941 "thin_provision": false, 00:08:54.941 "num_allocated_clusters": 38, 00:08:54.941 "snapshot": false, 00:08:54.941 "clone": false, 00:08:54.941 "esnap_clone": false 00:08:54.941 } 00:08:54.941 } 00:08:54.941 } 00:08:54.941 ] 00:08:54.941 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:54.941 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:54.941 16:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:55.202 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:55.202 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:55.202 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:55.463 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:55.463 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf065677-cd14-4ff0-8c69-2a97c387275b 00:08:55.463 16:03:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66133df9-69bc-43cc-95a2-4978bda0fbad 00:08:55.723 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.984 00:08:55.984 real 0m17.743s 00:08:55.984 user 0m45.538s 00:08:55.984 sys 0m3.061s 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.984 ************************************ 00:08:55.984 END TEST lvs_grow_dirty 00:08:55.984 ************************************ 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:55.984 nvmf_trace.0 00:08:55.984 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:55.985 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:55.985 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.985 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:55.985 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.985 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:55.985 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.985 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.985 rmmod nvme_tcp 00:08:55.985 rmmod nvme_fabrics 00:08:55.985 rmmod nvme_keyring 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:56.247 
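For reference, the trace-capture step in this cleanup boils down to: find the tracepoint shared-memory files the target left in /dev/shm (nvmf_trace.0 here, produced by nvmf_tgt -i 0 -e 0xFFFF) and archive them next to the build output. A sketch assuming the same output directory as this run:

  # Archive every SPDK tracepoint buffer with shm id 0.
  for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
      tar -C /dev/shm/ -cvzf \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/"${f}"_shm.tar.gz "$f"
  done
  # The resulting nvmf_trace.0_shm.tar.gz can be decoded offline with the
  # spdk_trace tool referenced in the target's startup notices.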
16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1104810 ']' 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1104810 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1104810 ']' 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1104810 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.247 16:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1104810 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1104810' 00:08:56.247 killing process with pid 1104810 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1104810 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1104810 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.247 16:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.795 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.796 00:08:58.796 real 0m44.500s 00:08:58.796 user 1m7.536s 00:08:58.796 sys 0m10.586s 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.796 ************************************ 00:08:58.796 END TEST nvmf_lvs_grow 00:08:58.796 ************************************ 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.796 ************************************ 00:08:58.796 START TEST nvmf_bdev_io_wait 00:08:58.796 ************************************ 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:58.796 * Looking for test storage... 00:08:58.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.796 --rc genhtml_branch_coverage=1 00:08:58.796 --rc genhtml_function_coverage=1 00:08:58.796 --rc genhtml_legend=1 00:08:58.796 --rc geninfo_all_blocks=1 00:08:58.796 --rc geninfo_unexecuted_blocks=1 00:08:58.796 00:08:58.796 ' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.796 --rc genhtml_branch_coverage=1 00:08:58.796 --rc genhtml_function_coverage=1 00:08:58.796 --rc genhtml_legend=1 00:08:58.796 --rc geninfo_all_blocks=1 00:08:58.796 --rc geninfo_unexecuted_blocks=1 00:08:58.796 00:08:58.796 ' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.796 --rc genhtml_branch_coverage=1 00:08:58.796 --rc genhtml_function_coverage=1 00:08:58.796 --rc genhtml_legend=1 00:08:58.796 --rc geninfo_all_blocks=1 00:08:58.796 --rc geninfo_unexecuted_blocks=1 00:08:58.796 00:08:58.796 ' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.796 --rc genhtml_branch_coverage=1 00:08:58.796 --rc genhtml_function_coverage=1 00:08:58.796 --rc genhtml_legend=1 00:08:58.796 --rc geninfo_all_blocks=1 00:08:58.796 --rc geninfo_unexecuted_blocks=1 00:08:58.796 00:08:58.796 ' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.796 16:03:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.796 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.797 16:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:06.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:06.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.948 16:03:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:06.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:06.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.948 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:06.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:09:06.949 00:09:06.949 --- 10.0.0.2 ping statistics --- 00:09:06.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.949 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:09:06.949 00:09:06.949 --- 10.0.0.1 ping statistics --- 00:09:06.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.949 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1109833 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1109833 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1109833 ']' 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.949 16:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.949 [2024-11-20 16:03:42.043453] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
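
Everything from gather_supported_nvmf_pci_devs down to the two pings is nvmf_tcp_init building a self-contained test topology: of the two detected e810 ports, cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables ACCEPT rule opens TCP/4420 before reachability is verified in both directions. A minimal sketch of the same pattern, with eth_tgt/eth_ini as hypothetical stand-ins for the cabled port pair:

    # target port gets its own namespace; initiator port stays in the root ns
    ip netns add spdk_tgt_ns
    ip link set eth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev eth_ini
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec spdk_tgt_ns ip link set eth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    # open the NVMe/TCP listener port, then prove both directions work
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1

The target is then started inside that namespace with --wait-for-rpc, which holds framework initialization until the test has had a chance to push bdev_set_options over the RPC socket.
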
00:09:06.949 [2024-11-20 16:03:42.043519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.949 [2024-11-20 16:03:42.145663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.949 [2024-11-20 16:03:42.200434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.949 [2024-11-20 16:03:42.200486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.949 [2024-11-20 16:03:42.200496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.949 [2024-11-20 16:03:42.200503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.949 [2024-11-20 16:03:42.200509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.949 [2024-11-20 16:03:42.202581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.949 [2024-11-20 16:03:42.202721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.949 [2024-11-20 16:03:42.202880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.949 [2024-11-20 16:03:42.202882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.255 16:03:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:07.255 [2024-11-20 16:03:43.002061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 Malloc0 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.255 [2024-11-20 16:03:43.067523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1110025 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1110027 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.255 { 00:09:07.255 "params": { 
00:09:07.255 "name": "Nvme$subsystem", 00:09:07.255 "trtype": "$TEST_TRANSPORT", 00:09:07.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.255 "adrfam": "ipv4", 00:09:07.255 "trsvcid": "$NVMF_PORT", 00:09:07.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.255 "hdgst": ${hdgst:-false}, 00:09:07.255 "ddgst": ${ddgst:-false} 00:09:07.255 }, 00:09:07.255 "method": "bdev_nvme_attach_controller" 00:09:07.255 } 00:09:07.255 EOF 00:09:07.255 )") 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1110029 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.255 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1110032 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.256 { 00:09:07.256 "params": { 00:09:07.256 "name": "Nvme$subsystem", 00:09:07.256 "trtype": "$TEST_TRANSPORT", 00:09:07.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.256 "adrfam": "ipv4", 00:09:07.256 "trsvcid": "$NVMF_PORT", 00:09:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.256 "hdgst": ${hdgst:-false}, 00:09:07.256 "ddgst": ${ddgst:-false} 00:09:07.256 }, 00:09:07.256 "method": "bdev_nvme_attach_controller" 00:09:07.256 } 00:09:07.256 EOF 00:09:07.256 )") 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.256 { 00:09:07.256 "params": { 00:09:07.256 "name": "Nvme$subsystem", 00:09:07.256 "trtype": "$TEST_TRANSPORT", 00:09:07.256 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:09:07.256 "adrfam": "ipv4", 00:09:07.256 "trsvcid": "$NVMF_PORT", 00:09:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.256 "hdgst": ${hdgst:-false}, 00:09:07.256 "ddgst": ${ddgst:-false} 00:09:07.256 }, 00:09:07.256 "method": "bdev_nvme_attach_controller" 00:09:07.256 } 00:09:07.256 EOF 00:09:07.256 )") 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.256 { 00:09:07.256 "params": { 00:09:07.256 "name": "Nvme$subsystem", 00:09:07.256 "trtype": "$TEST_TRANSPORT", 00:09:07.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.256 "adrfam": "ipv4", 00:09:07.256 "trsvcid": "$NVMF_PORT", 00:09:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.256 "hdgst": ${hdgst:-false}, 00:09:07.256 "ddgst": ${ddgst:-false} 00:09:07.256 }, 00:09:07.256 "method": "bdev_nvme_attach_controller" 00:09:07.256 } 00:09:07.256 EOF 00:09:07.256 )") 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1110025 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.256 "params": { 00:09:07.256 "name": "Nvme1", 00:09:07.256 "trtype": "tcp", 00:09:07.256 "traddr": "10.0.0.2", 00:09:07.256 "adrfam": "ipv4", 00:09:07.256 "trsvcid": "4420", 00:09:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.256 "hdgst": false, 00:09:07.256 "ddgst": false 00:09:07.256 }, 00:09:07.256 "method": "bdev_nvme_attach_controller" 00:09:07.256 }' 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.256 "params": { 00:09:07.256 "name": "Nvme1", 00:09:07.256 "trtype": "tcp", 00:09:07.256 "traddr": "10.0.0.2", 00:09:07.256 "adrfam": "ipv4", 00:09:07.256 "trsvcid": "4420", 00:09:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.256 "hdgst": false, 00:09:07.256 "ddgst": false 00:09:07.256 }, 00:09:07.256 "method": "bdev_nvme_attach_controller" 00:09:07.256 }' 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.256 "params": { 00:09:07.256 "name": "Nvme1", 00:09:07.256 "trtype": "tcp", 00:09:07.256 "traddr": "10.0.0.2", 00:09:07.256 "adrfam": "ipv4", 00:09:07.256 "trsvcid": "4420", 00:09:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.256 "hdgst": false, 00:09:07.256 "ddgst": false 00:09:07.256 }, 00:09:07.256 "method": "bdev_nvme_attach_controller" 00:09:07.256 }' 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.256 16:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.256 "params": { 00:09:07.256 "name": "Nvme1", 00:09:07.256 "trtype": "tcp", 00:09:07.256 "traddr": "10.0.0.2", 00:09:07.256 "adrfam": "ipv4", 00:09:07.256 "trsvcid": "4420", 00:09:07.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.256 "hdgst": false, 00:09:07.256 "ddgst": false 00:09:07.256 }, 00:09:07.256 "method": "bdev_nvme_attach_controller" 00:09:07.256 }' 00:09:07.256 [2024-11-20 16:03:43.127925] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:09:07.256 [2024-11-20 16:03:43.127994] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:07.256 [2024-11-20 16:03:43.129181] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:09:07.256 [2024-11-20 16:03:43.129255] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:07.256 [2024-11-20 16:03:43.137754] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:09:07.256 [2024-11-20 16:03:43.137864] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:07.256 [2024-11-20 16:03:43.140692] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
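
Four bdevperf instances run in parallel, one per workload (write, read, flush, unmap on core masks 0x10/0x20/0x40/0x80), each handed its attach-controller config through process substitution, which is why --json points at /dev/fd/63. The printf output above is only the config fragment; assuming the standard subsystems/bdev/config wrapper that gen_nvmf_target_json puts around it (the wrapper itself is not visible in this xtrace), one launch looks like:

    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    )

The per-instance -i 1..4 shared-memory IDs map to the spdk1..spdk4 --file-prefix values in the EAL lines that follow, keeping the four DPDK processes from colliding over hugepage files, and -s 256 caps each instance at 256 MB of hugepage memory.
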
00:09:07.256 [2024-11-20 16:03:43.140775] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:07.559 [2024-11-20 16:03:43.325101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.559 [2024-11-20 16:03:43.365618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:07.559 [2024-11-20 16:03:43.388803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.559 [2024-11-20 16:03:43.427227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:07.559 [2024-11-20 16:03:43.481805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.824 [2024-11-20 16:03:43.519924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:07.824 [2024-11-20 16:03:43.549195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.824 [2024-11-20 16:03:43.588300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:07.824 Running I/O for 1 seconds... 00:09:08.085 Running I/O for 1 seconds... 00:09:08.085 Running I/O for 1 seconds... 00:09:08.085 Running I/O for 1 seconds... 00:09:09.027 6898.00 IOPS, 26.95 MiB/s 00:09:09.027 Latency(us) 00:09:09.027 [2024-11-20T15:03:44.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.027 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:09.027 Nvme1n1 : 1.02 6927.94 27.06 0.00 0.00 18368.96 7536.64 25449.81 00:09:09.027 [2024-11-20T15:03:44.963Z] =================================================================================================================== 00:09:09.027 [2024-11-20T15:03:44.963Z] Total : 6927.94 27.06 0.00 0.00 18368.96 7536.64 25449.81 00:09:09.027 180904.00 IOPS, 706.66 MiB/s 00:09:09.027 Latency(us) 00:09:09.027 [2024-11-20T15:03:44.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.027 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:09.027 Nvme1n1 : 1.00 180539.19 705.23 0.00 0.00 704.70 307.20 1979.73 00:09:09.027 [2024-11-20T15:03:44.963Z] =================================================================================================================== 00:09:09.027 [2024-11-20T15:03:44.963Z] Total : 180539.19 705.23 0.00 0.00 704.70 307.20 1979.73 00:09:09.027 6629.00 IOPS, 25.89 MiB/s 00:09:09.027 Latency(us) 00:09:09.027 [2024-11-20T15:03:44.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.027 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:09.027 Nvme1n1 : 1.01 6716.45 26.24 0.00 0.00 18991.61 5324.80 32549.55 00:09:09.027 [2024-11-20T15:03:44.963Z] =================================================================================================================== 00:09:09.027 [2024-11-20T15:03:44.963Z] Total : 6716.45 26.24 0.00 0.00 18991.61 5324.80 32549.55 00:09:09.027 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1110027 00:09:09.027 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1110029 00:09:09.027 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1110032 00:09:09.027 11953.00 IOPS, 46.69 MiB/s 00:09:09.027 Latency(us) 00:09:09.027 [2024-11-20T15:03:44.963Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:09:09.027 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:09.027 Nvme1n1 : 1.01 12026.34 46.98 0.00 0.00 10610.33 4478.29 19660.80 00:09:09.028 [2024-11-20T15:03:44.964Z] =================================================================================================================== 00:09:09.028 [2024-11-20T15:03:44.964Z] Total : 12026.34 46.98 0.00 0.00 10610.33 4478.29 19660.80 00:09:09.288 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.288 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.288 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.288 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.288 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.289 rmmod nvme_tcp 00:09:09.289 rmmod nvme_fabrics 00:09:09.289 rmmod nvme_keyring 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1109833 ']' 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1109833 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1109833 ']' 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1109833 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1109833 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1109833' 00:09:09.289 killing process with pid 1109833 
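
Teardown mirrors setup: the EXIT trap is cleared, the kernel initiator modules are unloaded, and killprocess only signals the pid after confirming via ps that its comm name is an SPDK reactor (reactor_0 above), so a stale or recycled pid is never killed blindly. A stripped-down version of the pattern, assuming $nvmfpid holds the target's pid:

    sync                                  # settle outstanding I/O before unloading modules
    modprobe -v -r nvme-tcp               # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    if [[ $(ps --no-headers -o comm= "$nvmfpid") == reactor_* ]]; then
        kill "$nvmfpid"                   # default SIGTERM
        wait "$nvmfpid"                   # reap; valid because this shell launched the target
    fi

The modprobe -r calls sit inside the for i in {1..20} retry loop visible above, under set +e, because module removal can transiently fail while the last NVMe/TCP controller is still tearing down.
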
00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1109833 00:09:09.289 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1109833 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.551 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.100 00:09:12.100 real 0m13.157s 00:09:12.100 user 0m20.301s 00:09:12.100 sys 0m7.350s 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.100 ************************************ 00:09:12.100 END TEST nvmf_bdev_io_wait 00:09:12.100 ************************************ 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.100 ************************************ 00:09:12.100 START TEST nvmf_queue_depth 00:09:12.100 ************************************ 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:12.100 * Looking for test storage... 
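
The END TEST / START TEST banners and the real/user/sys trio between them come from the run_test wrapper in autotest_common.sh, which also explains the '[' 3 -le 1 ']' check in the xtrace: run_test verifies it received a test name plus a command before timing it. A rough analogue of the wrapper (a sketch, not the literal implementation):

    run_test() {
        local name=$1; shift
        (($# >= 1)) || return 1                      # need at least the command to run
        echo "************ START TEST $name ************"
        time "$@"                                    # wall-clock covers setup through teardown
        echo "************  END TEST $name  ************"
    }

    run_test nvmf_queue_depth test/nvmf/target/queue_depth.sh --transport=tcp

so the 0m13.157s real time above charges the entire nvmf_bdev_io_wait flow, target start-up and namespace teardown included, to that one test before nvmf_queue_depth begins.
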
00:09:12.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.100 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.101 --rc genhtml_branch_coverage=1 00:09:12.101 --rc genhtml_function_coverage=1 00:09:12.101 --rc genhtml_legend=1 00:09:12.101 --rc geninfo_all_blocks=1 00:09:12.101 --rc geninfo_unexecuted_blocks=1 00:09:12.101 00:09:12.101 ' 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.101 --rc genhtml_branch_coverage=1 00:09:12.101 --rc genhtml_function_coverage=1 00:09:12.101 --rc genhtml_legend=1 00:09:12.101 --rc geninfo_all_blocks=1 00:09:12.101 --rc geninfo_unexecuted_blocks=1 00:09:12.101 00:09:12.101 ' 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.101 --rc genhtml_branch_coverage=1 00:09:12.101 --rc genhtml_function_coverage=1 00:09:12.101 --rc genhtml_legend=1 00:09:12.101 --rc geninfo_all_blocks=1 00:09:12.101 --rc geninfo_unexecuted_blocks=1 00:09:12.101 00:09:12.101 ' 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.101 --rc genhtml_branch_coverage=1 00:09:12.101 --rc genhtml_function_coverage=1 00:09:12.101 --rc genhtml_legend=1 00:09:12.101 --rc geninfo_all_blocks=1 00:09:12.101 --rc geninfo_unexecuted_blocks=1 00:09:12.101 00:09:12.101 ' 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
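nvmf/common.sh seeds the test identity before anything network-related happens: three listener ports, a host NQN freshly generated by nvme-cli, and a host ID carried alongside it. Condensed from the trace; the host-ID derivation is an assumption (its exact expansion is not visible in the log, though the value shown is the NQN's trailing uuid):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: host ID = the uuid suffix of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")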
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.101 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
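The `[: : integer expression expected` message above is a real (if harmless) scripting slip being logged: common.sh line 33 ends up evaluating `'[' '' -eq 1 ']'` because the variable it tests is unset in this configuration, and `-eq` demands integers on both sides. An illustration of the pitfall and the usual guard (the variable name is hypothetical, not the one common.sh uses):

    flag=""
    [ "$flag" -eq 1 ]          # -> "[: : integer expression expected", exit status 2
    [ "${flag:-0}" -eq 1 ]     # defaulting the empty value to 0 keeps the test well-formed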
MALLOC_BLOCK_SIZE=512 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.102 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:20.246 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:20.247 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:20.247 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
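Device discovery starts from a vendor:device catalogue rather than from interface names: common.sh buckets known PCI IDs into NIC families (Intel E810 parts 0x1592/0x159b, X722 0x37d2, a list of Mellanox ConnectX IDs), and since this rig runs the e810 configuration only that bucket survives into pci_devs. A sketch of the grouping, assuming pci_bus_cache maps "vendor:device" to the PCI addresses present on the bus:

    declare -A pci_bus_cache         # e.g. pci_bus_cache["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"
    intel=0x8086 mellanox=0x15b3
    e810=( ${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]} )
    x722=( ${pci_bus_cache["$intel:0x37d2"]} )
    mlx=(  ${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]} )
    pci_devs=( "${e810[@]}" )        # the job's NIC family selection keeps only e810 here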
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:20.247 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:20.247 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
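With the PCI list fixed, net devices are resolved through sysfs rather than by parsing tool output; the two "Found net devices under ..." lines above come straight out of this glob. Condensed from the trace (the interface operstate check is elided):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
        pci_net_devs=( "${pci_net_devs[@]##*/}" )            # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=( "${pci_net_devs[@]}" )
    done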
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.247 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.247 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:20.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:09:20.248 00:09:20.248 --- 10.0.0.2 ping statistics --- 00:09:20.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.248 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
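nvmf_tcp_init above builds a two-endpoint topology out of one dual-port NIC: the target-side port is moved into a private network namespace so that 10.0.0.1 -> 10.0.0.2 traffic actually crosses the NIC instead of being short-circuited through loopback, and an iptables rule (tagged with a comment so teardown can find it later) opens the NVMe/TCP port. Condensed from the trace; the ping replies continue just below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator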
00:09:20.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:09:20.248 00:09:20.248 --- 10.0.0.1 ping statistics --- 00:09:20.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.248 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1114739 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1114739 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1114739 ']' 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.248 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.248 [2024-11-20 16:03:55.352506] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
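nvmfappstart launches the target inside that namespace and blocks until its RPC socket answers; pid 1114739 and the DPDK EAL banner that follows belong to this process. Condensed from the trace:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"      # polls /var/tmp/spdk.sock until the app is ready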
00:09:20.248 [2024-11-20 16:03:55.352573] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.248 [2024-11-20 16:03:55.458046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.248 [2024-11-20 16:03:55.506992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.248 [2024-11-20 16:03:55.507039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.248 [2024-11-20 16:03:55.507048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.248 [2024-11-20 16:03:55.507061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.248 [2024-11-20 16:03:55.507067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.248 [2024-11-20 16:03:55.507799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.248 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.248 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:20.248 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:20.248 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:20.248 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.509 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.509 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.510 [2024-11-20 16:03:56.210790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.510 Malloc0 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.510 16:03:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.510 [2024-11-20 16:03:56.272032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1114995 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1114995 /var/tmp/bdevperf.sock 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1114995 ']' 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.510 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.510 [2024-11-20 16:03:56.339296] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
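The target is provisioned entirely over JSON-RPC: a TCP transport, a 64 MiB ramdisk with 512-byte blocks, and a subsystem exposing that bdev on the test IP. rpc_cmd in the trace is functionally equivalent to invoking scripts/rpc.py against the running target, so the same sequence by hand would be:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420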
00:09:20.510 [2024-11-20 16:03:56.339375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114995 ] 00:09:20.510 [2024-11-20 16:03:56.431669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.770 [2024-11-20 16:03:56.484892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.342 16:03:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.342 16:03:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:21.342 16:03:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:21.342 16:03:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.342 16:03:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.602 NVMe0n1 00:09:21.602 16:03:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.602 16:03:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.602 Running I/O for 10 seconds... 00:09:23.924 8673.00 IOPS, 33.88 MiB/s [2024-11-20T15:04:00.801Z] 10228.00 IOPS, 39.95 MiB/s [2024-11-20T15:04:01.740Z] 10774.00 IOPS, 42.09 MiB/s [2024-11-20T15:04:02.679Z] 11261.00 IOPS, 43.99 MiB/s [2024-11-20T15:04:03.619Z] 11670.20 IOPS, 45.59 MiB/s [2024-11-20T15:04:04.559Z] 11953.83 IOPS, 46.69 MiB/s [2024-11-20T15:04:05.501Z] 12268.29 IOPS, 47.92 MiB/s [2024-11-20T15:04:06.883Z] 12433.62 IOPS, 48.57 MiB/s [2024-11-20T15:04:07.825Z] 12615.44 IOPS, 49.28 MiB/s [2024-11-20T15:04:07.825Z] 12737.70 IOPS, 49.76 MiB/s 00:09:31.889 Latency(us) 00:09:31.889 [2024-11-20T15:04:07.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.889 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:31.889 Verification LBA range: start 0x0 length 0x4000 00:09:31.889 NVMe0n1 : 10.05 12766.52 49.87 0.00 0.00 79901.27 12888.75 75584.85 00:09:31.889 [2024-11-20T15:04:07.825Z] =================================================================================================================== 00:09:31.889 [2024-11-20T15:04:07.825Z] Total : 12766.52 49.87 0.00 0.00 79901.27 12888.75 75584.85 00:09:31.889 { 00:09:31.889 "results": [ 00:09:31.889 { 00:09:31.889 "job": "NVMe0n1", 00:09:31.889 "core_mask": "0x1", 00:09:31.889 "workload": "verify", 00:09:31.889 "status": "finished", 00:09:31.889 "verify_range": { 00:09:31.889 "start": 0, 00:09:31.889 "length": 16384 00:09:31.889 }, 00:09:31.889 "queue_depth": 1024, 00:09:31.889 "io_size": 4096, 00:09:31.889 "runtime": 10.04933, 00:09:31.889 "iops": 12766.522743307265, 00:09:31.889 "mibps": 49.869229466044004, 00:09:31.889 "io_failed": 0, 00:09:31.889 "io_timeout": 0, 00:09:31.889 "avg_latency_us": 79901.272057056, 00:09:31.889 "min_latency_us": 12888.746666666666, 00:09:31.889 "max_latency_us": 75584.85333333333 00:09:31.889 } 00:09:31.889 ], 00:09:31.889 "core_count": 1 00:09:31.889 } 00:09:31.890 16:04:07 
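The measurement itself, condensed from the trace: bdevperf starts idle (-z), the remote namespace is attached through the NVMe-oF initiator bdev over bdevperf's private RPC socket, and perform_tests drives a 10-second verify workload at queue depth 1024 with 4 KiB I/O:

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The reported numbers are self-consistent: with 1024 I/Os kept outstanding, Little's law predicts 1024 / 12766.5 IOPS ≈ 80.2 ms average latency, matching the 79.9 ms in the table, and the per-second samples climbing from 8.7k to 12.7k IOPS are the connection ramping up to steady state.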
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1114995 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1114995 ']' 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1114995 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114995 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114995' 00:09:31.890 killing process with pid 1114995 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1114995 00:09:31.890 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.890 00:09:31.890 Latency(us) 00:09:31.890 [2024-11-20T15:04:07.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.890 [2024-11-20T15:04:07.826Z] =================================================================================================================== 00:09:31.890 [2024-11-20T15:04:07.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1114995 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.890 rmmod nvme_tcp 00:09:31.890 rmmod nvme_fabrics 00:09:31.890 rmmod nvme_keyring 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1114739 ']' 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1114739 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1114739 ']' 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
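killprocess above is autotest_common.sh's guarded kill: it checks the pid argument, confirms the process is still alive, inspects the command name so it never signals a sudo wrapper by mistake, then kills and reaps. A sketch reconstructed from the trace (the sudo special case and error paths are abbreviated):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # still running?
        local name=
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
        # the real helper branches when name == sudo; omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and propagate exit status
    }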
common/autotest_common.sh@958 -- # kill -0 1114739 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.890 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114739 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114739' 00:09:32.151 killing process with pid 1114739 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1114739 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1114739 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.151 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.699 00:09:34.699 real 0m22.557s 00:09:34.699 user 0m25.931s 00:09:34.699 sys 0m7.017s 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.699 ************************************ 00:09:34.699 END TEST nvmf_queue_depth 00:09:34.699 ************************************ 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core -- 
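nvmf_tcp_fini then unwinds exactly what nvmf_tcp_init set up; the comment tag planted on the iptables rule earlier turns rule removal into a simple filter. Condensed from the trace, with the namespace delete inside _remove_spdk_ns stated as an assumption (its body is not traced here):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged by this test
    _remove_spdk_ns                                        # presumably deletes cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1                               # continues in the log just below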
common/autotest_common.sh@10 -- # set +x 00:09:34.699 ************************************ 00:09:34.699 START TEST nvmf_target_multipath 00:09:34.699 ************************************ 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:34.699 * Looking for test storage... 00:09:34.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.699 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.700 --rc genhtml_branch_coverage=1 00:09:34.700 --rc genhtml_function_coverage=1 00:09:34.700 --rc genhtml_legend=1 00:09:34.700 --rc geninfo_all_blocks=1 00:09:34.700 --rc geninfo_unexecuted_blocks=1 00:09:34.700 00:09:34.700 ' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.700 --rc genhtml_branch_coverage=1 00:09:34.700 --rc genhtml_function_coverage=1 00:09:34.700 --rc genhtml_legend=1 00:09:34.700 --rc geninfo_all_blocks=1 00:09:34.700 --rc geninfo_unexecuted_blocks=1 00:09:34.700 00:09:34.700 ' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.700 --rc genhtml_branch_coverage=1 00:09:34.700 --rc genhtml_function_coverage=1 00:09:34.700 --rc genhtml_legend=1 00:09:34.700 --rc geninfo_all_blocks=1 00:09:34.700 --rc geninfo_unexecuted_blocks=1 00:09:34.700 00:09:34.700 ' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.700 --rc genhtml_branch_coverage=1 00:09:34.700 --rc genhtml_function_coverage=1 00:09:34.700 --rc genhtml_legend=1 00:09:34.700 --rc geninfo_all_blocks=1 00:09:34.700 --rc geninfo_unexecuted_blocks=1 00:09:34.700 00:09:34.700 ' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.700 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.701 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.701 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.701 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.701 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.701 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.701 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.701 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:42.840 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:42.840 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:42.840 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.840 16:04:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:42.840 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:42.840 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:09:42.841 00:09:42.841 --- 10.0.0.2 ping statistics --- 00:09:42.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.841 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:42.841 00:09:42.841 --- 10.0.0.1 ping statistics --- 00:09:42.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.841 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:42.841 only one NIC for nvmf test 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
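The nvmf_tcp_init sequence traced above reduces to a short namespace recipe: flush both E810 ports, move the target-side port into its own network namespace, address the two ends, open the NVMe/TCP port, and ping in both directions. Below is a minimal bash sketch of the equivalent commands, reconstructed from the trace; the interface names, namespace name, addresses, and the SPDK_NVMF rule tag are taken from the log, but this is not the SPDK helper itself, and the matching teardown is traced just below.

```bash
#!/usr/bin/env bash
set -euo pipefail

target_if=cvl_0_0          # port under 0000:4b:00.0 (from the trace)
initiator_if=cvl_0_1       # port under 0000:4b:00.1
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

# Isolate the target port so target/initiator traffic must cross the
# physical link between the two NIC ports.
ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Open the NVMe/TCP listener port; the comment tag lets teardown later
# strip exactly the rules this test added and nothing else.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator
```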
00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.841 rmmod nvme_tcp 00:09:42.841 rmmod nvme_fabrics 00:09:42.841 rmmod nvme_keyring 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.841 16:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.841 16:04:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.841 16:04:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.841 16:04:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.841 16:04:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:44.228 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.229 00:09:44.229 real 0m9.970s 00:09:44.229 user 0m2.243s 00:09:44.229 sys 0m5.685s 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.229 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:44.229 ************************************ 00:09:44.229 END TEST nvmf_target_multipath 00:09:44.229 ************************************ 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.490 ************************************ 00:09:44.490 START TEST nvmf_zcopy 00:09:44.490 ************************************ 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:44.490 * Looking for test storage... 
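The nvmftestfini teardown just traced is the mirror image; the notable trick is iptr, which removes only the tagged firewall rules by filtering an iptables-save dump. A rough sketch under the same assumptions follows; the netns delete step is an assumed stand-in for the _remove_spdk_ns helper, whose body is not shown in the trace.

```bash
# Unload initiator-side modules; the script loops over this and
# tolerates transient failures (set +e around the modprobe calls).
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true

# iptr: rewrite the ruleset without any rule carrying the SPDK_NVMF tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Assumed equivalent of _remove_spdk_ns, then clear the initiator address
# (the trace ends with 'ip -4 addr flush cvl_0_1').
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1
```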
00:09:44.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.490 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.491 --rc genhtml_branch_coverage=1 00:09:44.491 --rc genhtml_function_coverage=1 00:09:44.491 --rc genhtml_legend=1 00:09:44.491 --rc geninfo_all_blocks=1 00:09:44.491 --rc geninfo_unexecuted_blocks=1 00:09:44.491 00:09:44.491 ' 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.491 --rc genhtml_branch_coverage=1 00:09:44.491 --rc genhtml_function_coverage=1 00:09:44.491 --rc genhtml_legend=1 00:09:44.491 --rc geninfo_all_blocks=1 00:09:44.491 --rc geninfo_unexecuted_blocks=1 00:09:44.491 00:09:44.491 ' 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.491 --rc genhtml_branch_coverage=1 00:09:44.491 --rc genhtml_function_coverage=1 00:09:44.491 --rc genhtml_legend=1 00:09:44.491 --rc geninfo_all_blocks=1 00:09:44.491 --rc geninfo_unexecuted_blocks=1 00:09:44.491 00:09:44.491 ' 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.491 --rc genhtml_branch_coverage=1 00:09:44.491 --rc genhtml_function_coverage=1 00:09:44.491 --rc genhtml_legend=1 00:09:44.491 --rc geninfo_all_blocks=1 00:09:44.491 --rc geninfo_unexecuted_blocks=1 00:09:44.491 00:09:44.491 ' 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.491 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=[rotated re-export of the same directories; value elided] 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=[rotated re-export of the same directories; value elided] 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo [same PATH; value elided] 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.753 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.895 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:52.896 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:52.896 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:52.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:52.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:09:52.896 00:09:52.896 --- 10.0.0.2 ping statistics --- 00:09:52.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.896 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:09:52.896 00:09:52.896 --- 10.0.0.1 ping statistics --- 00:09:52.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.896 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1125785 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1125785 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1125785 ']' 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.896 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.896 [2024-11-20 16:04:28.018403] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:09:52.896 [2024-11-20 16:04:28.018472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.896 [2024-11-20 16:04:28.123646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.896 [2024-11-20 16:04:28.173178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.896 [2024-11-20 16:04:28.173233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.896 [2024-11-20 16:04:28.173241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.896 [2024-11-20 16:04:28.173248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.897 [2024-11-20 16:04:28.173254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.897 [2024-11-20 16:04:28.174005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.158 [2024-11-20 16:04:28.883574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.158 [2024-11-20 16:04:28.907896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.158 malloc0 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.158 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.159 { 00:09:53.159 "params": { 00:09:53.159 "name": "Nvme$subsystem", 00:09:53.159 "trtype": "$TEST_TRANSPORT", 00:09:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.159 "adrfam": "ipv4", 00:09:53.159 "trsvcid": "$NVMF_PORT", 00:09:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.159 "hdgst": ${hdgst:-false}, 00:09:53.159 "ddgst": ${ddgst:-false} 00:09:53.159 }, 00:09:53.159 "method": "bdev_nvme_attach_controller" 00:09:53.159 } 00:09:53.159 EOF 00:09:53.159 )") 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
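Stripped of xtrace noise, the zcopy bring-up above is a handful of RPCs against a target running inside the namespace, followed by bdevperf reading its controller-attach config as JSON. A condensed sketch under stated assumptions: paths are relative to the SPDK tree, nvmf/common.sh is sourced so gen_nvmf_target_json (the helper traced here, whose printf output follows) is available, the real script also runs waitforlisten on the RPC socket, and the /dev/fd/62 in the trace corresponds to the process substitution shown.

```bash
rpc=scripts/rpc.py
ns="ip netns exec cvl_0_0_ns_spdk"

# Target with the TCP transport in zero-copy mode; -c 0 disables
# in-capsule data, -o enables optimal I/O boundary reporting.
$ns build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

$ns $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$ns $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$ns $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$ns $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$ns $rpc bdev_malloc_create 32 4096 -b malloc0   # 32 MiB ramdisk, 4 KiB blocks
$ns $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side (root namespace): 10 s verify workload, queue depth 128,
# 8 KiB I/O, attaching to the subsystem via the generated JSON config.
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
```

The malloc ramdisk and the -m 10 namespace cap keep the test self-contained, and the verify workload checks data integrity end to end, which is the property of interest for a zero-copy data path.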
00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:53.159 16:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.159 "params": { 00:09:53.159 "name": "Nvme1", 00:09:53.159 "trtype": "tcp", 00:09:53.159 "traddr": "10.0.0.2", 00:09:53.159 "adrfam": "ipv4", 00:09:53.159 "trsvcid": "4420", 00:09:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.159 "hdgst": false, 00:09:53.159 "ddgst": false 00:09:53.159 }, 00:09:53.159 "method": "bdev_nvme_attach_controller" 00:09:53.159 }' 00:09:53.159 [2024-11-20 16:04:29.005658] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:09:53.159 [2024-11-20 16:04:29.005724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125826 ] 00:09:53.427 [2024-11-20 16:04:29.099872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.427 [2024-11-20 16:04:29.154231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.692 Running I/O for 10 seconds... 00:09:56.021 6446.00 IOPS, 50.36 MiB/s [2024-11-20T15:04:32.899Z] 7574.50 IOPS, 59.18 MiB/s [2024-11-20T15:04:33.841Z] 8308.00 IOPS, 64.91 MiB/s [2024-11-20T15:04:34.781Z] 8673.25 IOPS, 67.76 MiB/s [2024-11-20T15:04:35.722Z] 8898.80 IOPS, 69.52 MiB/s [2024-11-20T15:04:36.665Z] 9042.67 IOPS, 70.65 MiB/s [2024-11-20T15:04:37.606Z] 9146.86 IOPS, 71.46 MiB/s [2024-11-20T15:04:38.990Z] 9225.75 IOPS, 72.08 MiB/s [2024-11-20T15:04:39.562Z] 9286.56 IOPS, 72.55 MiB/s [2024-11-20T15:04:39.823Z] 9334.50 IOPS, 72.93 MiB/s 00:10:03.887 Latency(us) 00:10:03.887 [2024-11-20T15:04:39.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.887 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:03.887 Verification LBA range: start 0x0 length 0x1000 00:10:03.887 Nvme1n1 : 10.01 9334.83 72.93 0.00 0.00 13666.49 1870.51 27962.03 00:10:03.887 [2024-11-20T15:04:39.823Z] =================================================================================================================== 00:10:03.887 [2024-11-20T15:04:39.823Z] Total : 9334.83 72.93 0.00 0.00 13666.49 1870.51 27962.03 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1127999 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.887 { 00:10:03.887 "params": { 00:10:03.887 "name": 
"Nvme$subsystem", 00:10:03.887 "trtype": "$TEST_TRANSPORT", 00:10:03.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.887 "adrfam": "ipv4", 00:10:03.887 "trsvcid": "$NVMF_PORT", 00:10:03.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.887 "hdgst": ${hdgst:-false}, 00:10:03.887 "ddgst": ${ddgst:-false} 00:10:03.887 }, 00:10:03.887 "method": "bdev_nvme_attach_controller" 00:10:03.887 } 00:10:03.887 EOF 00:10:03.887 )") 00:10:03.887 [2024-11-20 16:04:39.668390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.668418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:03.887 [2024-11-20 16:04:39.676384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.676393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:03.887 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.887 "params": { 00:10:03.887 "name": "Nvme1", 00:10:03.887 "trtype": "tcp", 00:10:03.887 "traddr": "10.0.0.2", 00:10:03.887 "adrfam": "ipv4", 00:10:03.887 "trsvcid": "4420", 00:10:03.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.887 "hdgst": false, 00:10:03.887 "ddgst": false 00:10:03.887 }, 00:10:03.887 "method": "bdev_nvme_attach_controller" 00:10:03.887 }' 00:10:03.887 [2024-11-20 16:04:39.684402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.684410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.692421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.692429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.700442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.700449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.712471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.712479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.713519] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:10:03.887 [2024-11-20 16:04:39.713566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127999 ] 00:10:03.887 [2024-11-20 16:04:39.724502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.724510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.736533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.736541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.748564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.748572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.760596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.760604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.772626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.772634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.780647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.780654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.788669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.788676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.796199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.887 [2024-11-20 16:04:39.796690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.796697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.804711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.804721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.812730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.812739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.887 [2024-11-20 16:04:39.820751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.887 [2024-11-20 16:04:39.820761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.826839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.148 [2024-11-20 16:04:39.828772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.828781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.836794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:04.148 [2024-11-20 16:04:39.836802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.844820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.844831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.852837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.852847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.860858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.860869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.868879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.868887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.876898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.876907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.884917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.884924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.892944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.892956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.900961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.900972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.908979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.908988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.917000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.917009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.925020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.925030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.933041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.933051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.941061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.941068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.949081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.949088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 
16:04:39.957102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.957109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.965123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.965131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.973145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.973153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.981169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.981178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.148 [2024-11-20 16:04:39.989189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.148 [2024-11-20 16:04:39.989196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:39.997211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:39.997217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.005236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.005245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.013297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.013313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.021282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.021291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.029301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.029309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.037322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.037330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.045341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.045348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.053362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.053369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.061383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.061391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.149 [2024-11-20 16:04:40.069404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.149 [2024-11-20 16:04:40.069411] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.114595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.114610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.121544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.121553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 Running I/O for 5 seconds... 00:10:04.410 [2024-11-20 16:04:40.129562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.129570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.140554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.140570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.148690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.148704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.157707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.157722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.166490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.166505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.175216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.175231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.184252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.184266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.192833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.192847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.201935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.201950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.210394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.210408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.219602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.219616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.228627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.228641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.236997] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.237011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.245925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.245939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.254621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.254635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.263381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.263395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.272455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.272474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.281058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.281072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.290331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.290345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.298925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.298939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.308193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.308207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.316694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.316708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.325589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.410 [2024-11-20 16:04:40.325603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.410 [2024-11-20 16:04:40.334277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.411 [2024-11-20 16:04:40.334291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.411 [2024-11-20 16:04:40.342832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.411 [2024-11-20 16:04:40.342846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.351253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.351268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.360163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.360177] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.368917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.368930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.377583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.377596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.386413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.386428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.395106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.395120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.404238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.404252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.412870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.412883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.671 [2024-11-20 16:04:40.421725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.671 [2024-11-20 16:04:40.421739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.430575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.430590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.439057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.439075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.447386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.447400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.456218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.456232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.465261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.465275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.474235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.474249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.482794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.482808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.491799] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.491812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.500738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.500752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.509721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.509735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.518511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.518525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.527041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.527055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.535895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.535909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.545107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.545121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.554018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.554033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.562324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.562338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.571294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.571307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.580043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.580056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.588425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.588439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.672 [2024-11-20 16:04:40.597624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.672 [2024-11-20 16:04:40.597638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.606698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.606718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.615277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.615291] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.623969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.623982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.632257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.632271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.640673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.640687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.649368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.649382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.657804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.657818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.666978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.666992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.676093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.676108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.684734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.684747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.693544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.693558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.702302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.702316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.711420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.711434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.720263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.720277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.728856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.728871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.737844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.737858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.746428] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.746441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.755467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.755482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.763767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.763781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.772781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.772796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.781416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.781430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.790408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.790422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.798893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.798907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.807526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.807542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.816171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.816185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.825230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.825244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.833776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.833790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.842683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.842697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.932 [2024-11-20 16:04:40.851262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.932 [2024-11-20 16:04:40.851276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.933 [2024-11-20 16:04:40.860068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.933 [2024-11-20 16:04:40.860082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.868550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.868565] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.877848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.877862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.885838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.885852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.894618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.894633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.903514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.903528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.912086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.912100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.920560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.920574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.929407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.929421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.938326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.938340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.947393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.947407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.956380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.956394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.965242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.965255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.974300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.974315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.983166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.983180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:40.991970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:40.991983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.000849] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.000863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.009409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.009423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.018164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.018178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.027223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.027238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.036419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.036434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.044521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.044535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.053353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.053368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.062060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.062074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.070708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.070722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.079410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.079423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.088038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.088052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.097221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.097235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.105977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.105991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.113970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.113984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-11-20 16:04:41.123348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-11-20 16:04:41.123362] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 19091.00 IOPS, 149.15 MiB/s [2024-11-20T15:04:41.391Z] [2024-11-20 16:04:41.132438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.132452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.140814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.140830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.149250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.149265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.158222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.158236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.167150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.167171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.176124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.176139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.184426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.184440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.193473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.193487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.201857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.201871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.210597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.210611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.219685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.219700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.228321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.228336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.237224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.237238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.246326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.246341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 
16:04:41.255273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.255287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.264370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.264388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.273532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.273546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.282460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.282474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.290918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.290932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.299944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.299959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.308977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.308991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.318034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.318048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.326613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.326627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.335212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.335228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.343803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.343818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.352789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.352804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.361841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.361856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.370920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.370934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.379771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.379785] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.455 [2024-11-20 16:04:41.388486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.455 [2024-11-20 16:04:41.388501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.397482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.397497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.406128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.406142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.415050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.415064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.424255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.424270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.432199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.432218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.441042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.441057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.449797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.449812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.459060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.459075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.468170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.468184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.477131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.477146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.485586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.485600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.494531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.494546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.503548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.503562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.716 [2024-11-20 16:04:41.512087] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.716 [2024-11-20 16:04:41.512102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair -- "Requested NSID 1 already in use" from subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext followed by "Unable to add namespace" from nvmf_rpc.c:1517:nvmf_rpc_ns_paused -- repeats for every add-namespace attempt, one attempt roughly every 9 ms from [2024-11-20 16:04:41.512102] through [2024-11-20 16:04:44.186301]; only the interleaved I/O throughput samples carry new information and are kept below ...]
00:10:06.311 19170.00 IOPS, 149.77 MiB/s [2024-11-20T15:04:42.247Z]
00:10:07.453 19205.00 IOPS, 150.04 MiB/s [2024-11-20T15:04:43.389Z]
00:10:08.239 19209.50 IOPS, 150.07 MiB/s [2024-11-20T15:04:44.175Z]
00:10:08.500 [2024-11-20 16:04:44.186301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:08.500 [2024-11-20 16:04:44.186315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.194891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.194906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.203790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.203805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.212826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.212840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.221941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.221955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.231027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.231042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.240092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.240106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.249025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.249039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.257554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.257568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.266177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.266191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.275243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.275258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.283952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.283966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.292806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.292821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.300979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.300993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.309772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.309786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.318528] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.318542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.326934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.326948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.335814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.335828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.344584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.344598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.353312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.353326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.362178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.362192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.371393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.371407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.379843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.379858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.389005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.389020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.397889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.397903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.406710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.406724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.415721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.415735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.424451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.424465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.500 [2024-11-20 16:04:44.433363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.500 [2024-11-20 16:04:44.433377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.441815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.441830] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.450560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.450575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.459206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.459220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.468443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.468457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.477011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.477026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.486077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.486091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.494435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.494449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.503216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.503230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.512302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.512317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.520952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.520966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.529547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.529561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.538862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.538876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.547534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.547549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.556331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.556345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.565096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.565111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.574153] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.574173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.583084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.583101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.591960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.591975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.600560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.600574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.609432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.609447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.618237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.618252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.627406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.627420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.635802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.635817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.644450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.644464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.653352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.653367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.662322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.662336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.670787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.670802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.679865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.679880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.761 [2024-11-20 16:04:44.688890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.761 [2024-11-20 16:04:44.688904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.697875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.020 [2024-11-20 16:04:44.697889] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.706510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.020 [2024-11-20 16:04:44.706525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.715316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.020 [2024-11-20 16:04:44.715330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.723600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.020 [2024-11-20 16:04:44.723615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.732298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.020 [2024-11-20 16:04:44.732312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.740743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.020 [2024-11-20 16:04:44.740759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.749522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.020 [2024-11-20 16:04:44.749540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.020 [2024-11-20 16:04:44.758498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.758513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.766671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.766685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.775668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.775682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.784695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.784710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.793170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.793184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.802121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.802136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.811005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.811020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.819752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.819766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.828493] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.828507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.837508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.837523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.846251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.846265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.854675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.854690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.863185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.863200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.872041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.872055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.881151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.881170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.890374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.890389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.898961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.898976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.907957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.907972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.916763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.916781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.925376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.925391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.934253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.934268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.942827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.942842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.021 [2024-11-20 16:04:44.951937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.021 [2024-11-20 16:04:44.951952] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:44.961066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:44.961081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:44.969747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:44.969761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:44.978852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:44.978867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:44.987784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:44.987798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:44.996315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:44.996329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.005055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.005070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.014066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.014081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.023280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.023295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.032427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.032441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.041353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.041367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.049643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.049658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.058501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.058515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.067375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.067389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.076121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.280 [2024-11-20 16:04:45.076135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.280 [2024-11-20 16:04:45.085193] 
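The flood above is zcopy.sh doing what it intends: while queued I/O runs against NSID 1, the script keeps re-issuing nvmf_subsystem_add_ns for an NSID that is still claimed, and spdk_nvmf_subsystem_add_ns_ext rejects every attempt. A minimal sketch of the same collision against a running target; only cnode1 and NSID 1 come from this log, the spare malloc1 bdev is hypothetical:

    # Assumes scripts/rpc.py from the SPDK tree and a target that already
    # exports NSID 1 on nqn.2016-06.io.spdk:cnode1 (as in this run).
    scripts/rpc.py bdev_malloc_create -b malloc1 64 512     # hypothetical spare bdev, 64 MiB / 512 B blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1 \
        || echo 'rejected as expected: Requested NSID 1 already in use'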
00:10:09.280 19234.80 IOPS, 150.27 MiB/s [2024-11-20T15:04:45.216Z]
00:10:09.280 [2024-11-20 16:04:45.143389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:09.280 [2024-11-20 16:04:45.143403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:09.281
00:10:09.281 Latency(us)
00:10:09.281 [2024-11-20T15:04:45.217Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average       min       max
00:10:09.281 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:09.281 Nvme1n1            :       5.01  19239.41   150.31     0.00     0.00   6648.16   2757.97  17148.59
00:10:09.281 [2024-11-20T15:04:45.217Z] ===================================================================================================================
00:10:09.281 [2024-11-20T15:04:45.217Z] Total              :             19239.41   150.31     0.00     0.00   6648.16   2757.97  17148.59
[log trimmed: the error pair keeps repeating while the run winds down, 16:04:45.151 through 16:04:45.243]
00:10:09.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1127999) - No such process
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1127999
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:09.541 delay0
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
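With NSID 1 released by the remove_ns call, the script re-adds it backed by delay0, a delay bdev layered on malloc0 so that every I/O sits in flight for roughly a second; that is what gives the abort run below something to cancel. A sketch of the same pattern; the flag meanings are my reading of rpc.py bdev_delay_create (latencies in microseconds), not confirmed from this log:

    # -r/-t: average / 99th-percentile read latency; -w/-n: same for writes
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s per I/O
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1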
00:10:09.541 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:09.541 [2024-11-20 16:04:45.407198] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:16.121 [2024-11-20 16:04:51.501449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c2c00 is same with the state(6) to be set
00:10:16.121 Initializing NVMe Controllers
00:10:16.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:16.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:16.121 Initialization complete. Launching workers.
00:10:16.122 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 162
00:10:16.122 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 449, failed to submit 33
00:10:16.122 success 280, unsuccessful 169, failed 0
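The abort run submitted 449 aborts and landed 280 of them (280 + 169 = 449), which is the point of the delay0 namespace: with about a second of injected latency, most of the 64 queued commands are still outstanding when the abort arrives. The invocation, reflowed from the log, with my reading of the flags as comments (assumptions, not taken from the tool's help):

    # -c 0x1: core mask   -t 5: run seconds   -q 64: queue depth
    # -w randrw -M 50: 50/50 random read/write   -r: target transport ID
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'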
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:16.122 rmmod nvme_tcp
00:10:16.122 rmmod nvme_fabrics
00:10:16.122 rmmod nvme_keyring
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1125785 ']'
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1125785
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1125785 ']'
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1125785
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125785
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125785'
00:10:16.122 killing process with pid 1125785
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1125785
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1125785
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:16.122 16:04:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:18.034
00:10:18.034 real 0m33.628s
00:10:18.034 user 0m44.832s
00:10:18.034 sys 0m11.338s
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.034 ************************************
00:10:18.034 END TEST nvmf_zcopy
00:10:18.034 ************************************
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:18.034 ************************************
00:10:18.034 START TEST nvmf_nmic
00:10:18.034 ************************************
00:10:18.034 16:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:18.295 * Looking for test storage...
00:10:18.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.295 --rc genhtml_branch_coverage=1 00:10:18.295 --rc genhtml_function_coverage=1 00:10:18.295 --rc genhtml_legend=1 00:10:18.295 --rc geninfo_all_blocks=1 00:10:18.295 --rc geninfo_unexecuted_blocks=1 00:10:18.295 00:10:18.295 ' 00:10:18.295 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.296 --rc genhtml_branch_coverage=1 00:10:18.296 --rc genhtml_function_coverage=1 00:10:18.296 --rc genhtml_legend=1 00:10:18.296 --rc geninfo_all_blocks=1 00:10:18.296 --rc geninfo_unexecuted_blocks=1 00:10:18.296 00:10:18.296 ' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.296 --rc genhtml_branch_coverage=1 00:10:18.296 --rc genhtml_function_coverage=1 00:10:18.296 --rc genhtml_legend=1 00:10:18.296 --rc geninfo_all_blocks=1 00:10:18.296 --rc geninfo_unexecuted_blocks=1 00:10:18.296 00:10:18.296 ' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.296 --rc genhtml_branch_coverage=1 00:10:18.296 --rc genhtml_function_coverage=1 00:10:18.296 --rc genhtml_legend=1 00:10:18.296 --rc geninfo_all_blocks=1 00:10:18.296 --rc geninfo_unexecuted_blocks=1 00:10:18.296 00:10:18.296 ' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:18.296 
16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.296 16:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:26.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.443 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:26.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.444 16:05:01 
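The discovery pass above matches PCI functions against known NIC IDs (Intel 0x8086 with device 0x159b for the two E810 ports found here) and, in the lines that follow, resolves each function to its kernel net device through sysfs. A hypothetical direct-sysfs equivalent of both steps, standing in for the pci_bus_cache lookups the script actually uses:

    # Scan sysfs for Intel E810 functions and list their bound net devices.
    intel=0x8086 e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
        echo "Found ${pci##*/} ($intel - $e810)"
        for dev in "$pci"/net/*; do          # same path shape as common.sh@411 above
            echo "  net device: ${dev##*/}"
        done
    done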
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:26.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:26.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:10:26.444 00:10:26.444 --- 10.0.0.2 ping statistics --- 00:10:26.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.444 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:10:26.444 00:10:26.444 --- 10.0.0.1 ping statistics --- 00:10:26.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.444 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1134518 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1134518 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1134518 ']' 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.444 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.444 [2024-11-20 16:05:01.727141] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
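Both pings succeeding confirms the split topology: the target-side port (cvl_0_0, 10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic genuinely crosses the link between the two E810 ports. A condensed replay of the plumbing traced above, with interface names and addresses taken from this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open port 4420 and tag the rule so nvmftestfini can strip it later:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator
    # From here on every target-side command, nvmf_tgt included, is prefixed
    # with: ip netns exec "$NS" ...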
00:10:26.444 [2024-11-20 16:05:01.727226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.444 [2024-11-20 16:05:01.830978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.444 [2024-11-20 16:05:01.887246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.444 [2024-11-20 16:05:01.887302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.444 [2024-11-20 16:05:01.887311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.445 [2024-11-20 16:05:01.887318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.445 [2024-11-20 16:05:01.887325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.445 [2024-11-20 16:05:01.889667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.445 [2024-11-20 16:05:01.889829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.445 [2024-11-20 16:05:01.889991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.445 [2024-11-20 16:05:01.889992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.706 [2024-11-20 16:05:02.607030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.706 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.968 Malloc0 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
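With all four reactors up, the test configures the target entirely over JSON-RPC; rpc_cmd is a thin wrapper that talks to /var/tmp/spdk.sock via scripts/rpc.py. The sequence for this test, including the add_ns/add_listener calls traced just below, written as direct invocations (flags exactly as captured; the comments are interpretation):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDKISFASTANDAWESOME                       # allow any host, set the serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420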
-- common/autotest_common.sh@10 -- # set +x 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.968 [2024-11-20 16:05:02.680236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:26.968 test case1: single bdev can't be used in multiple subsystems 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.968 [2024-11-20 16:05:02.716018] bdev.c:8259:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:26.968 [2024-11-20 16:05:02.716048] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:26.968 [2024-11-20 16:05:02.716057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.968 request: 00:10:26.968 { 00:10:26.968 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:26.968 "namespace": { 00:10:26.968 "bdev_name": "Malloc0", 00:10:26.968 "no_auto_visible": false 
00:10:26.968 }, 00:10:26.968 "method": "nvmf_subsystem_add_ns", 00:10:26.968 "req_id": 1 00:10:26.968 } 00:10:26.968 Got JSON-RPC error response 00:10:26.968 response: 00:10:26.968 { 00:10:26.968 "code": -32602, 00:10:26.968 "message": "Invalid parameters" 00:10:26.968 } 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:26.968 Adding namespace failed - expected result. 00:10:26.968 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:26.969 test case2: host connect to nvmf target in multiple paths 00:10:26.969 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:26.969 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.969 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.969 [2024-11-20 16:05:02.728248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:26.969 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.969 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.356 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:30.268 16:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.268 16:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:30.268 16:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.268 16:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:30.268 16:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:32.196 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:32.196 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:32.196 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.196 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:32.196 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.196 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:32.196 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
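Case1 behaves as designed: Malloc0 is already claimed exclusive_write by cnode1, so attaching it to cnode2 is rejected and the -32602 JSON-RPC error above is the expected outcome. Case2 then exercises multipath by adding a second listener on port 4421 and connecting to the same subsystem over both portals. A simplified form of the connect-and-wait sequence traced above, with host identity values from this run and a loosened stand-in for waitforserial:

    HOSTARGS=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
              --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be)
    nvme connect "${HOSTARGS[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect "${HOSTARGS[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # Poll until the namespace surfaces as a block device with the expected serial:
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )); do
        sleep 2
    done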
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:32.196 [global] 00:10:32.196 thread=1 00:10:32.196 invalidate=1 00:10:32.196 rw=write 00:10:32.196 time_based=1 00:10:32.196 runtime=1 00:10:32.196 ioengine=libaio 00:10:32.196 direct=1 00:10:32.196 bs=4096 00:10:32.196 iodepth=1 00:10:32.196 norandommap=0 00:10:32.196 numjobs=1 00:10:32.196 00:10:32.196 verify_dump=1 00:10:32.196 verify_backlog=512 00:10:32.196 verify_state_save=0 00:10:32.196 do_verify=1 00:10:32.196 verify=crc32c-intel 00:10:32.196 [job0] 00:10:32.196 filename=/dev/nvme0n1 00:10:32.196 Could not set queue depth (nvme0n1) 00:10:32.462 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.462 fio-3.35 00:10:32.462 Starting 1 thread 00:10:33.848 00:10:33.848 job0: (groupid=0, jobs=1): err= 0: pid=1136076: Wed Nov 20 16:05:09 2024 00:10:33.848 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:33.848 slat (nsec): min=7553, max=62072, avg=26769.38, stdev=3399.87 00:10:33.848 clat (usec): min=677, max=1205, avg=976.36, stdev=65.61 00:10:33.848 lat (usec): min=704, max=1231, avg=1003.13, stdev=65.58 00:10:33.848 clat percentiles (usec): 00:10:33.848 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 889], 20.00th=[ 938], 00:10:33.848 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996], 00:10:33.848 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:10:33.848 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:33.848 | 99.99th=[ 1205] 00:10:33.848 write: IOPS=871, BW=3485KiB/s (3568kB/s)(3488KiB/1001msec); 0 zone resets 00:10:33.848 slat (nsec): min=8901, max=66957, avg=28806.16, stdev=10482.09 00:10:33.848 clat (usec): min=171, max=769, avg=517.38, stdev=104.01 00:10:33.848 lat (usec): min=181, max=802, avg=546.19, stdev=108.95 00:10:33.848 clat percentiles (usec): 00:10:33.848 | 1.00th=[ 262], 5.00th=[ 338], 10.00th=[ 379], 20.00th=[ 437], 00:10:33.848 | 30.00th=[ 465], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 545], 00:10:33.848 | 70.00th=[ 570], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 668], 00:10:33.848 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 766], 99.95th=[ 766], 00:10:33.848 | 99.99th=[ 766] 00:10:33.848 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:33.848 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:33.848 lat (usec) : 250=0.36%, 500=25.29%, 750=37.50%, 1000=24.13% 00:10:33.848 lat (msec) : 2=12.72% 00:10:33.848 cpu : usr=2.90%, sys=5.10%, ctx=1384, majf=0, minf=1 00:10:33.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.848 issued rwts: total=512,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.848 00:10:33.848 Run status group 0 (all jobs): 00:10:33.848 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:33.848 WRITE: bw=3485KiB/s (3568kB/s), 3485KiB/s-3485KiB/s (3568kB/s-3568kB/s), io=3488KiB (3572kB), run=1001-1001msec 00:10:33.848 00:10:33.848 Disk stats (read/write): 00:10:33.848 nvme0n1: ios=562/697, merge=0/0, ticks=502/294, in_queue=796, util=92.69% 00:10:33.848 16:05:09 
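The fio-wrapper flags (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expand into the [global]/[job0] file shown above: a single one-second, 4 KiB, queue-depth-1 sequential-write job against /dev/nvme0n1 with CRC32C data verification, which is why a read row (512 IOPS) appears in a pure-write job: the verify pass reads the written blocks back. Assuming fio's job-file keys map onto same-named command-line options, which holds for the ones used here, an equivalent direct run would be:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread=1 \
        --time_based=1 --runtime=1 --invalidate=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0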
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:33.848 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.849 rmmod nvme_tcp 00:10:33.849 rmmod nvme_fabrics 00:10:33.849 rmmod nvme_keyring 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1134518 ']' 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1134518 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1134518 ']' 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1134518 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1134518 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1134518' 00:10:33.849 killing process with pid 1134518 00:10:33.849 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1134518 00:10:33.849 16:05:09 
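Teardown runs in the reverse order of setup: disconnect the host (one nvme disconnect drops both controllers, hence "disconnected 2 controller(s)"), unload the host-side modules with retries since nvme-tcp can stay busy briefly after disconnect, then kill the target only after confirming the PID still belongs to an SPDK reactor. A condensed mirror of the sequence above; nvmfpid is the PID recorded at nvmfappstart (1134518 in this run):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # detaches both paths at once
    sync
    set +e                                              # module removal may need retries
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    # Only kill if the process is still the SPDK reactor we started:
    if [[ $(ps --no-headers -o comm= "$nvmfpid") == reactor_0 ]]; then
        kill "$nvmfpid" && wait "$nvmfpid"
    fi
    # nvmftestfini (next) also strips the tagged firewall rule,
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    # flushes cvl_0_1, and removes the namespace via _remove_spdk_ns
    # (assumed to boil down to: ip netns delete cvl_0_0_ns_spdk).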
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1134518 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.110 16:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.023 16:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.023 00:10:36.023 real 0m17.987s 00:10:36.023 user 0m47.903s 00:10:36.023 sys 0m6.702s 00:10:36.023 16:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.023 16:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.023 ************************************ 00:10:36.023 END TEST nvmf_nmic 00:10:36.023 ************************************ 00:10:36.023 16:05:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:36.023 16:05:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.023 16:05:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.023 16:05:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.286 ************************************ 00:10:36.286 START TEST nvmf_fio_target 00:10:36.286 ************************************ 00:10:36.286 16:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:36.286 * Looking for test storage... 
00:10:36.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:36.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.286 --rc genhtml_branch_coverage=1 00:10:36.286 --rc genhtml_function_coverage=1 00:10:36.286 --rc genhtml_legend=1 00:10:36.286 --rc geninfo_all_blocks=1 00:10:36.286 --rc geninfo_unexecuted_blocks=1 00:10:36.286 00:10:36.286 ' 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:36.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.286 --rc genhtml_branch_coverage=1 00:10:36.286 --rc genhtml_function_coverage=1 00:10:36.286 --rc genhtml_legend=1 00:10:36.286 --rc geninfo_all_blocks=1 00:10:36.286 --rc geninfo_unexecuted_blocks=1 00:10:36.286 00:10:36.286 ' 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:36.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.286 --rc genhtml_branch_coverage=1 00:10:36.286 --rc genhtml_function_coverage=1 00:10:36.286 --rc genhtml_legend=1 00:10:36.286 --rc geninfo_all_blocks=1 00:10:36.286 --rc geninfo_unexecuted_blocks=1 00:10:36.286 00:10:36.286 ' 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:36.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.286 --rc genhtml_branch_coverage=1 00:10:36.286 --rc genhtml_function_coverage=1 00:10:36.286 --rc genhtml_legend=1 00:10:36.286 --rc geninfo_all_blocks=1 00:10:36.286 --rc geninfo_unexecuted_blocks=1 00:10:36.286 00:10:36.286 ' 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
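Before the fio_target test proper, autotest_common probes the installed lcov and routes through cmp_versions ("lt 1.15 2") to decide which coverage flags apply; lcov older than 2 gets the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings exported above. A compact stand-in for that comparison, matching its behavior for plain numeric versions (the name version_lt is illustrative, not the script's):

    # Return success when version $1 sorts strictly before $2.
    version_lt() {
        local -a a b; local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc coverage options"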
uname -s 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:36.286 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.287 16:05:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.287 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.548 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:36.548 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:36.548 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.548 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.695 16:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:44.695 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:44.695 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.695 16:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.695 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:44.696 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:44.696 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.696 16:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:44.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:10:44.696 00:10:44.696 --- 10.0.0.2 ping statistics --- 00:10:44.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.696 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:44.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:10:44.696 00:10:44.696 --- 10.0.0.1 ping statistics --- 00:10:44.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.696 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1140615 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1140615 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1140615 ']' 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.696 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.696 [2024-11-20 16:05:19.885519] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
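For reference, the nvmf_tcp_init sequence traced above reduces to the following standalone commands (a minimal sketch; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run):

    # Isolate the target-side E810 port in its own network namespace so the
    # initiator (host side) and target (namespace side) exercise a real TCP path.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic (port 4420) in through the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Connectivity checks in both directions (the two pings above).
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF): -m 0xF pins four reactors, matching the four "Reactor started" notices below, and -e 0xFFFF enables all tracepoint groups, matching the "Tracepoint Group Mask 0xFFFF" notice.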
00:10:44.696 [2024-11-20 16:05:19.885589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.696 [2024-11-20 16:05:19.987527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.696 [2024-11-20 16:05:20.043397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.696 [2024-11-20 16:05:20.043452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.696 [2024-11-20 16:05:20.043461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.696 [2024-11-20 16:05:20.043469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.696 [2024-11-20 16:05:20.043476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.696 [2024-11-20 16:05:20.045588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.696 [2024-11-20 16:05:20.045628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.697 [2024-11-20 16:05:20.045790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.697 [2024-11-20 16:05:20.045791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.958 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.958 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:44.958 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.958 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.958 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.958 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.958 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:45.220 [2024-11-20 16:05:20.925580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.220 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.481 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:45.481 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.742 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:45.742 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.742 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:45.742 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.003 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:46.003 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:46.264 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.525 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:46.525 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.525 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:46.525 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.785 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:46.785 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:47.046 16:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.308 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:47.308 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.308 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:47.308 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.569 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.829 [2024-11-20 16:05:23.533960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.829 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:47.829 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:48.089 16:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.999 16:05:25 
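Condensed, the target/fio.sh setup traced above builds one subsystem with four namespaces. A minimal sketch of the same RPC sequence (the original interleaves the calls slightly differently; $rpc stands for the scripts/rpc.py path shown in the trace, which reaches nvmf_tgt over the UNIX socket /var/tmp/spdk.sock and therefore works even though the target runs inside the network namespace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6: 64 MB, 512 B blocks
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do                # the four namespaces
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

The waitforserial helper that follows polls lsblk -l -o NAME,SERIAL until four block devices with serial SPDKISFASTANDAWESOME appear (/dev/nvme0n1 through /dev/nvme0n4).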
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:49.999 16:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.999 16:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.999 16:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:49.999 16:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:49.999 16:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:51.922 16:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:51.922 16:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:51.922 16:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.922 16:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:51.922 16:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.922 16:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:51.922 16:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:51.922 [global] 00:10:51.922 thread=1 00:10:51.922 invalidate=1 00:10:51.922 rw=write 00:10:51.922 time_based=1 00:10:51.922 runtime=1 00:10:51.922 ioengine=libaio 00:10:51.922 direct=1 00:10:51.922 bs=4096 00:10:51.922 iodepth=1 00:10:51.922 norandommap=0 00:10:51.922 numjobs=1 00:10:51.922 00:10:51.922 verify_dump=1 00:10:51.922 verify_backlog=512 00:10:51.922 verify_state_save=0 00:10:51.922 do_verify=1 00:10:51.922 verify=crc32c-intel 00:10:51.922 [job0] 00:10:51.922 filename=/dev/nvme0n1 00:10:51.922 [job1] 00:10:51.922 filename=/dev/nvme0n2 00:10:51.922 [job2] 00:10:51.922 filename=/dev/nvme0n3 00:10:51.922 [job3] 00:10:51.922 filename=/dev/nvme0n4 00:10:51.922 Could not set queue depth (nvme0n1) 00:10:51.922 Could not set queue depth (nvme0n2) 00:10:51.922 Could not set queue depth (nvme0n3) 00:10:51.922 Could not set queue depth (nvme0n4) 00:10:52.182 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.182 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.182 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.182 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.182 fio-3.35 00:10:52.182 Starting 4 threads 00:10:53.573 00:10:53.573 job0: (groupid=0, jobs=1): err= 0: pid=1142351: Wed Nov 20 16:05:29 2024 00:10:53.573 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.573 slat (nsec): min=10237, max=63079, avg=27938.33, stdev=2671.11 00:10:53.573 clat (usec): min=769, max=1187, avg=981.34, stdev=68.67 00:10:53.573 lat (usec): min=797, max=1215, avg=1009.28, stdev=68.57 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 799], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 930], 
00:10:53.573 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:10:53.573 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:10:53.573 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:10:53.573 | 99.99th=[ 1188] 00:10:53.573 write: IOPS=773, BW=3093KiB/s (3167kB/s)(3096KiB/1001msec); 0 zone resets 00:10:53.573 slat (nsec): min=9583, max=72885, avg=31746.13, stdev=10293.37 00:10:53.573 clat (usec): min=142, max=863, avg=579.41, stdev=122.39 00:10:53.573 lat (usec): min=153, max=903, avg=611.16, stdev=126.22 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 260], 5.00th=[ 355], 10.00th=[ 408], 20.00th=[ 478], 00:10:53.573 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:10:53.573 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 766], 00:10:53.573 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 865], 99.95th=[ 865], 00:10:53.573 | 99.99th=[ 865] 00:10:53.573 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.573 lat (usec) : 250=0.23%, 500=14.77%, 750=41.76%, 1000=27.14% 00:10:53.573 lat (msec) : 2=16.10% 00:10:53.573 cpu : usr=3.60%, sys=4.30%, ctx=1287, majf=0, minf=1 00:10:53.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.573 issued rwts: total=512,774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.573 job1: (groupid=0, jobs=1): err= 0: pid=1142352: Wed Nov 20 16:05:29 2024 00:10:53.573 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.573 slat (nsec): min=7678, max=62078, avg=26896.50, stdev=3631.82 00:10:53.573 clat (usec): min=764, max=1236, avg=1023.57, stdev=81.08 00:10:53.573 lat (usec): min=790, max=1262, avg=1050.46, stdev=80.73 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 832], 5.00th=[ 898], 10.00th=[ 922], 20.00th=[ 955], 00:10:53.573 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1045], 00:10:53.573 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1156], 00:10:53.573 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:10:53.573 | 99.99th=[ 1237] 00:10:53.573 write: IOPS=733, BW=2933KiB/s (3003kB/s)(2936KiB/1001msec); 0 zone resets 00:10:53.573 slat (nsec): min=9773, max=55046, avg=30881.47, stdev=10284.29 00:10:53.573 clat (usec): min=125, max=942, avg=585.81, stdev=128.71 00:10:53.573 lat (usec): min=137, max=977, avg=616.69, stdev=133.06 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 253], 5.00th=[ 367], 10.00th=[ 408], 20.00th=[ 482], 00:10:53.573 | 30.00th=[ 519], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:10:53.573 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 791], 00:10:53.573 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 947], 00:10:53.573 | 99.99th=[ 947] 00:10:53.573 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.573 lat (usec) : 250=0.48%, 500=13.96%, 750=39.57%, 1000=20.39% 00:10:53.573 lat (msec) : 2=25.60% 00:10:53.573 cpu : usr=1.90%, sys=3.70%, ctx=1249, majf=0, minf=1 00:10:53.573 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.573 issued rwts: total=512,734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.573 job2: (groupid=0, jobs=1): err= 0: pid=1142354: Wed Nov 20 16:05:29 2024 00:10:53.573 read: IOPS=268, BW=1075KiB/s (1101kB/s)(1076KiB/1001msec) 00:10:53.573 slat (nsec): min=25007, max=43587, avg=26341.68, stdev=2310.51 00:10:53.573 clat (usec): min=710, max=42090, avg=2830.95, stdev=8458.10 00:10:53.573 lat (usec): min=736, max=42115, avg=2857.29, stdev=8457.87 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 734], 5.00th=[ 799], 10.00th=[ 873], 20.00th=[ 955], 00:10:53.573 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1029], 00:10:53.573 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1188], 95.00th=[ 1254], 00:10:53.573 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:53.573 | 99.99th=[42206] 00:10:53.573 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:53.573 slat (nsec): min=9353, max=69974, avg=27950.15, stdev=9771.82 00:10:53.573 clat (usec): min=124, max=756, avg=414.43, stdev=124.86 00:10:53.573 lat (usec): min=134, max=808, avg=442.38, stdev=127.57 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 151], 5.00th=[ 219], 10.00th=[ 258], 20.00th=[ 318], 00:10:53.573 | 30.00th=[ 334], 40.00th=[ 367], 50.00th=[ 408], 60.00th=[ 445], 00:10:53.573 | 70.00th=[ 474], 80.00th=[ 523], 90.00th=[ 586], 95.00th=[ 635], 00:10:53.573 | 99.00th=[ 717], 99.50th=[ 750], 99.90th=[ 758], 99.95th=[ 758], 00:10:53.573 | 99.99th=[ 758] 00:10:53.573 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.573 lat (usec) : 250=6.02%, 500=44.30%, 750=15.49%, 1000=13.44% 00:10:53.573 lat (msec) : 2=19.21%, 50=1.54% 00:10:53.573 cpu : usr=1.50%, sys=1.80%, ctx=782, majf=0, minf=1 00:10:53.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.573 issued rwts: total=269,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.573 job3: (groupid=0, jobs=1): err= 0: pid=1142355: Wed Nov 20 16:05:29 2024 00:10:53.573 read: IOPS=918, BW=3672KiB/s (3760kB/s)(3676KiB/1001msec) 00:10:53.573 slat (nsec): min=7020, max=62612, avg=26008.62, stdev=5209.71 00:10:53.573 clat (usec): min=235, max=1042, avg=670.44, stdev=174.74 00:10:53.573 lat (usec): min=248, max=1069, avg=696.45, stdev=175.30 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 306], 5.00th=[ 371], 10.00th=[ 441], 20.00th=[ 515], 00:10:53.573 | 30.00th=[ 578], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 701], 00:10:53.573 | 70.00th=[ 742], 80.00th=[ 865], 90.00th=[ 922], 95.00th=[ 947], 00:10:53.573 | 99.00th=[ 1004], 99.50th=[ 1020], 99.90th=[ 1045], 99.95th=[ 1045], 00:10:53.573 | 99.99th=[ 1045] 00:10:53.573 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:53.573 slat (nsec): min=9749, max=64492, avg=26889.45, stdev=12042.06 00:10:53.573 clat (usec): min=107, max=823, avg=310.17, 
stdev=159.61 00:10:53.573 lat (usec): min=118, max=858, avg=337.06, stdev=166.09 00:10:53.573 clat percentiles (usec): 00:10:53.573 | 1.00th=[ 119], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 147], 00:10:53.573 | 30.00th=[ 180], 40.00th=[ 251], 50.00th=[ 277], 60.00th=[ 302], 00:10:53.573 | 70.00th=[ 392], 80.00th=[ 453], 90.00th=[ 537], 95.00th=[ 635], 00:10:53.573 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 799], 99.95th=[ 824], 00:10:53.573 | 99.99th=[ 824] 00:10:53.573 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.573 lat (usec) : 250=20.69%, 500=33.35%, 750=31.86%, 1000=13.59% 00:10:53.574 lat (msec) : 2=0.51% 00:10:53.574 cpu : usr=2.70%, sys=5.30%, ctx=1944, majf=0, minf=1 00:10:53.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.574 issued rwts: total=919,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.574 00:10:53.574 Run status group 0 (all jobs): 00:10:53.574 READ: bw=8839KiB/s (9051kB/s), 1075KiB/s-3672KiB/s (1101kB/s-3760kB/s), io=8848KiB (9060kB), run=1001-1001msec 00:10:53.574 WRITE: bw=11.9MiB/s (12.5MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=11.9MiB (12.5MB), run=1001-1001msec 00:10:53.574 00:10:53.574 Disk stats (read/write): 00:10:53.574 nvme0n1: ios=531/512, merge=0/0, ticks=1446/242, in_queue=1688, util=96.69% 00:10:53.574 nvme0n2: ios=507/512, merge=0/0, ticks=1450/279, in_queue=1729, util=97.04% 00:10:53.574 nvme0n3: ios=131/512, merge=0/0, ticks=807/208, in_queue=1015, util=91.85% 00:10:53.574 nvme0n4: ios=705/1024, merge=0/0, ticks=1383/308, in_queue=1691, util=97.00% 00:10:53.574 16:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:53.574 [global] 00:10:53.574 thread=1 00:10:53.574 invalidate=1 00:10:53.574 rw=randwrite 00:10:53.574 time_based=1 00:10:53.574 runtime=1 00:10:53.574 ioengine=libaio 00:10:53.574 direct=1 00:10:53.574 bs=4096 00:10:53.574 iodepth=1 00:10:53.574 norandommap=0 00:10:53.574 numjobs=1 00:10:53.574 00:10:53.574 verify_dump=1 00:10:53.574 verify_backlog=512 00:10:53.574 verify_state_save=0 00:10:53.574 do_verify=1 00:10:53.574 verify=crc32c-intel 00:10:53.574 [job0] 00:10:53.574 filename=/dev/nvme0n1 00:10:53.574 [job1] 00:10:53.574 filename=/dev/nvme0n2 00:10:53.574 [job2] 00:10:53.574 filename=/dev/nvme0n3 00:10:53.574 [job3] 00:10:53.574 filename=/dev/nvme0n4 00:10:53.574 Could not set queue depth (nvme0n1) 00:10:53.574 Could not set queue depth (nvme0n2) 00:10:53.574 Could not set queue depth (nvme0n3) 00:10:53.574 Could not set queue depth (nvme0n4) 00:10:53.835 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.835 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.835 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.835 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.835 fio-3.35 00:10:53.835 Starting 4 threads 00:10:55.249 
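All four fio-wrapper invocations in this test follow the same pattern, and the flags map directly onto the [global] section echoed above (a reading inferred from the generated job file, not from wrapper documentation):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper \
        -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
    #   -i 4096      -> bs=4096
    #   -d 1         -> iodepth=1      (the later runs pass -d 128 -> iodepth=128)
    #   -t randwrite -> rw=randwrite   (write, randwrite and read across the runs)
    #   -r 1         -> runtime=1 with time_based=1
    #   -v           -> do_verify=1, verify=crc32c-intel and the verify_* options

One job per namespace is generated (/dev/nvme0n1 through /dev/nvme0n4). The "Could not set queue depth" warnings appear to be fio failing to adjust the block-device queue depth; in any case they are non-fatal here, as all four jobs start and complete normally.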
00:10:55.249 job0: (groupid=0, jobs=1): err= 0: pid=1142873: Wed Nov 20 16:05:30 2024 00:10:55.249 read: IOPS=17, BW=69.2KiB/s (70.8kB/s)(72.0KiB/1041msec) 00:10:55.249 slat (nsec): min=24490, max=25404, avg=24809.00, stdev=210.67 00:10:55.249 clat (usec): min=1013, max=42869, avg=39533.71, stdev=9626.79 00:10:55.249 lat (usec): min=1038, max=42894, avg=39558.52, stdev=9626.81 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[40633], 20.00th=[41157], 00:10:55.249 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:55.249 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:55.249 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:55.249 | 99.99th=[42730] 00:10:55.249 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:10:55.249 slat (nsec): min=9193, max=50082, avg=28481.57, stdev=7910.12 00:10:55.249 clat (usec): min=258, max=902, avg=605.51, stdev=128.92 00:10:55.249 lat (usec): min=269, max=933, avg=633.99, stdev=131.15 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[ 318], 5.00th=[ 367], 10.00th=[ 433], 20.00th=[ 490], 00:10:55.249 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:10:55.249 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807], 00:10:55.249 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:10:55.249 | 99.99th=[ 906] 00:10:55.249 bw ( KiB/s): min= 4096, max= 4096, per=35.71%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.249 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.249 lat (usec) : 500=21.13%, 750=63.58%, 1000=11.89% 00:10:55.249 lat (msec) : 2=0.19%, 50=3.21% 00:10:55.249 cpu : usr=0.87%, sys=1.35%, ctx=530, majf=0, minf=1 00:10:55.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.249 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.249 job1: (groupid=0, jobs=1): err= 0: pid=1142874: Wed Nov 20 16:05:30 2024 00:10:55.249 read: IOPS=260, BW=1043KiB/s (1068kB/s)(1044KiB/1001msec) 00:10:55.249 slat (nsec): min=6824, max=44593, avg=23964.48, stdev=7461.70 00:10:55.249 clat (usec): min=372, max=43087, avg=2857.62, stdev=9118.24 00:10:55.249 lat (usec): min=398, max=43113, avg=2881.58, stdev=9118.64 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[ 429], 5.00th=[ 537], 10.00th=[ 578], 20.00th=[ 619], 00:10:55.249 | 30.00th=[ 644], 40.00th=[ 685], 50.00th=[ 725], 60.00th=[ 758], 00:10:55.249 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[28443], 00:10:55.249 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:55.249 | 99.99th=[43254] 00:10:55.249 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:55.249 slat (nsec): min=9512, max=68253, avg=28846.36, stdev=9216.90 00:10:55.249 clat (usec): min=226, max=684, avg=444.31, stdev=87.97 00:10:55.249 lat (usec): min=236, max=717, avg=473.16, stdev=92.68 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 310], 20.00th=[ 359], 00:10:55.249 | 30.00th=[ 400], 40.00th=[ 437], 50.00th=[ 457], 60.00th=[ 478], 00:10:55.249 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 578], 
00:10:55.249 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 685], 00:10:55.249 | 99.99th=[ 685] 00:10:55.249 bw ( KiB/s): min= 4096, max= 4096, per=35.71%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.249 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.249 lat (usec) : 250=0.13%, 500=49.55%, 750=35.83%, 1000=12.68% 00:10:55.249 lat (msec) : 50=1.81% 00:10:55.249 cpu : usr=1.30%, sys=2.00%, ctx=773, majf=0, minf=1 00:10:55.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.249 issued rwts: total=261,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.249 job2: (groupid=0, jobs=1): err= 0: pid=1142880: Wed Nov 20 16:05:30 2024 00:10:55.249 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:55.249 slat (nsec): min=7257, max=45808, avg=25563.66, stdev=4090.30 00:10:55.249 clat (usec): min=559, max=1771, avg=939.53, stdev=135.78 00:10:55.249 lat (usec): min=568, max=1797, avg=965.09, stdev=136.37 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[ 603], 5.00th=[ 701], 10.00th=[ 750], 20.00th=[ 816], 00:10:55.249 | 30.00th=[ 873], 40.00th=[ 922], 50.00th=[ 963], 60.00th=[ 996], 00:10:55.249 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1139], 00:10:55.249 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1778], 99.95th=[ 1778], 00:10:55.249 | 99.99th=[ 1778] 00:10:55.249 write: IOPS=936, BW=3744KiB/s (3834kB/s)(3748KiB/1001msec); 0 zone resets 00:10:55.249 slat (nsec): min=9472, max=50679, avg=29531.53, stdev=8675.53 00:10:55.249 clat (usec): min=174, max=919, avg=498.25, stdev=110.87 00:10:55.249 lat (usec): min=206, max=951, avg=527.78, stdev=113.76 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[ 265], 5.00th=[ 322], 10.00th=[ 367], 20.00th=[ 408], 00:10:55.249 | 30.00th=[ 441], 40.00th=[ 469], 50.00th=[ 494], 60.00th=[ 523], 00:10:55.250 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 701], 00:10:55.250 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 922], 99.95th=[ 922], 00:10:55.250 | 99.99th=[ 922] 00:10:55.250 bw ( KiB/s): min= 4096, max= 4096, per=35.71%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.250 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.250 lat (usec) : 250=0.35%, 500=33.33%, 750=33.26%, 1000=19.74% 00:10:55.250 lat (msec) : 2=13.32% 00:10:55.250 cpu : usr=2.30%, sys=4.00%, ctx=1449, majf=0, minf=1 00:10:55.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.250 issued rwts: total=512,937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.250 job3: (groupid=0, jobs=1): err= 0: pid=1142882: Wed Nov 20 16:05:30 2024 00:10:55.250 read: IOPS=674, BW=2697KiB/s (2762kB/s)(2700KiB/1001msec) 00:10:55.250 slat (nsec): min=7140, max=60403, avg=22923.01, stdev=8149.03 00:10:55.250 clat (usec): min=324, max=985, avg=755.40, stdev=92.68 00:10:55.250 lat (usec): min=337, max=1011, avg=778.33, stdev=94.05 00:10:55.250 clat percentiles (usec): 00:10:55.250 | 1.00th=[ 478], 5.00th=[ 594], 10.00th=[ 627], 20.00th=[ 685], 00:10:55.250 | 
30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 791], 00:10:55.250 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 881], 00:10:55.250 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:10:55.250 | 99.99th=[ 988] 00:10:55.250 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:55.250 slat (nsec): min=9534, max=50628, avg=28730.23, stdev=9061.32 00:10:55.250 clat (usec): min=181, max=913, avg=422.51, stdev=94.74 00:10:55.250 lat (usec): min=205, max=945, avg=451.24, stdev=97.06 00:10:55.250 clat percentiles (usec): 00:10:55.250 | 1.00th=[ 217], 5.00th=[ 273], 10.00th=[ 314], 20.00th=[ 338], 00:10:55.250 | 30.00th=[ 363], 40.00th=[ 396], 50.00th=[ 429], 60.00th=[ 449], 00:10:55.250 | 70.00th=[ 469], 80.00th=[ 498], 90.00th=[ 537], 95.00th=[ 570], 00:10:55.250 | 99.00th=[ 668], 99.50th=[ 734], 99.90th=[ 848], 99.95th=[ 914], 00:10:55.250 | 99.99th=[ 914] 00:10:55.250 bw ( KiB/s): min= 4096, max= 4096, per=35.71%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.250 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.250 lat (usec) : 250=1.65%, 500=47.79%, 750=26.72%, 1000=23.84% 00:10:55.250 cpu : usr=2.40%, sys=4.60%, ctx=1700, majf=0, minf=1 00:10:55.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.250 issued rwts: total=675,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.250 00:10:55.250 Run status group 0 (all jobs): 00:10:55.250 READ: bw=5633KiB/s (5768kB/s), 69.2KiB/s-2697KiB/s (70.8kB/s-2762kB/s), io=5864KiB (6005kB), run=1001-1041msec 00:10:55.250 WRITE: bw=11.2MiB/s (11.7MB/s), 1967KiB/s-4092KiB/s (2015kB/s-4190kB/s), io=11.7MiB (12.2MB), run=1001-1041msec 00:10:55.250 00:10:55.250 Disk stats (read/write): 00:10:55.250 nvme0n1: ios=63/512, merge=0/0, ticks=540/296, in_queue=836, util=86.57% 00:10:55.250 nvme0n2: ios=295/512, merge=0/0, ticks=628/214, in_queue=842, util=87.67% 00:10:55.250 nvme0n3: ios=560/652, merge=0/0, ticks=594/307, in_queue=901, util=92.49% 00:10:55.250 nvme0n4: ios=512/899, merge=0/0, ticks=384/358, in_queue=742, util=89.51% 00:10:55.250 16:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:55.250 [global] 00:10:55.250 thread=1 00:10:55.250 invalidate=1 00:10:55.250 rw=write 00:10:55.250 time_based=1 00:10:55.250 runtime=1 00:10:55.250 ioengine=libaio 00:10:55.250 direct=1 00:10:55.250 bs=4096 00:10:55.250 iodepth=128 00:10:55.250 norandommap=0 00:10:55.250 numjobs=1 00:10:55.250 00:10:55.250 verify_dump=1 00:10:55.250 verify_backlog=512 00:10:55.250 verify_state_save=0 00:10:55.250 do_verify=1 00:10:55.250 verify=crc32c-intel 00:10:55.250 [job0] 00:10:55.250 filename=/dev/nvme0n1 00:10:55.250 [job1] 00:10:55.250 filename=/dev/nvme0n2 00:10:55.250 [job2] 00:10:55.250 filename=/dev/nvme0n3 00:10:55.250 [job3] 00:10:55.250 filename=/dev/nvme0n4 00:10:55.250 Could not set queue depth (nvme0n1) 00:10:55.250 Could not set queue depth (nvme0n2) 00:10:55.250 Could not set queue depth (nvme0n3) 00:10:55.250 Could not set queue depth (nvme0n4) 00:10:55.515 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.515 
job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.515 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.515 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.515 fio-3.35 00:10:55.515 Starting 4 threads 00:10:56.905 00:10:56.905 job0: (groupid=0, jobs=1): err= 0: pid=1143400: Wed Nov 20 16:05:32 2024 00:10:56.905 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:10:56.905 slat (nsec): min=881, max=14466k, avg=72645.23, stdev=504275.00 00:10:56.905 clat (usec): min=3491, max=25499, avg=9559.42, stdev=2792.11 00:10:56.905 lat (usec): min=3494, max=25525, avg=9632.06, stdev=2830.92 00:10:56.905 clat percentiles (usec): 00:10:56.905 | 1.00th=[ 5014], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 7570], 00:10:56.905 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 9503], 00:10:56.905 | 70.00th=[10290], 80.00th=[11600], 90.00th=[14746], 95.00th=[15401], 00:10:56.905 | 99.00th=[15926], 99.50th=[16909], 99.90th=[21627], 99.95th=[24773], 00:10:56.905 | 99.99th=[25560] 00:10:56.905 write: IOPS=6979, BW=27.3MiB/s (28.6MB/s)(27.4MiB/1004msec); 0 zone resets 00:10:56.905 slat (nsec): min=1537, max=8414.7k, avg=66932.41, stdev=377789.45 00:10:56.905 clat (usec): min=666, max=32259, avg=9092.26, stdev=5203.79 00:10:56.905 lat (usec): min=698, max=32267, avg=9159.19, stdev=5237.59 00:10:56.905 clat percentiles (usec): 00:10:56.905 | 1.00th=[ 1172], 5.00th=[ 3425], 10.00th=[ 4948], 20.00th=[ 6259], 00:10:56.905 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8291], 00:10:56.905 | 70.00th=[ 8848], 80.00th=[10028], 90.00th=[14877], 95.00th=[21890], 00:10:56.905 | 99.00th=[28443], 99.50th=[29492], 99.90th=[31851], 99.95th=[32375], 00:10:56.905 | 99.99th=[32375] 00:10:56.905 bw ( KiB/s): min=24576, max=30456, per=28.64%, avg=27516.00, stdev=4157.79, samples=2 00:10:56.905 iops : min= 6144, max= 7614, avg=6879.00, stdev=1039.45, samples=2 00:10:56.905 lat (usec) : 750=0.10%, 1000=0.19% 00:10:56.905 lat (msec) : 2=0.57%, 4=2.60%, 10=69.59%, 20=23.55%, 50=3.40% 00:10:56.905 cpu : usr=3.59%, sys=4.49%, ctx=707, majf=0, minf=1 00:10:56.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:56.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.906 issued rwts: total=6656,7007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.906 job1: (groupid=0, jobs=1): err= 0: pid=1143401: Wed Nov 20 16:05:32 2024 00:10:56.906 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1008msec) 00:10:56.906 slat (nsec): min=888, max=26229k, avg=99388.15, stdev=825841.51 00:10:56.906 clat (usec): min=1201, max=75176, avg=12675.15, stdev=10529.69 00:10:56.906 lat (usec): min=1503, max=75201, avg=12774.54, stdev=10620.64 00:10:56.906 clat percentiles (usec): 00:10:56.906 | 1.00th=[ 2376], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7635], 00:10:56.906 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9503], 00:10:56.906 | 70.00th=[10290], 80.00th=[16909], 90.00th=[21365], 95.00th=[33162], 00:10:56.906 | 99.00th=[57410], 99.50th=[63701], 99.90th=[63701], 99.95th=[71828], 00:10:56.906 | 99.99th=[74974] 00:10:56.906 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 
00:10:56.906 slat (nsec): min=1569, max=16893k, avg=100487.62, stdev=761017.87 00:10:56.906 clat (usec): min=784, max=54252, avg=13274.68, stdev=9838.52 00:10:56.906 lat (usec): min=799, max=54286, avg=13375.17, stdev=9919.05 00:10:56.906 clat percentiles (usec): 00:10:56.906 | 1.00th=[ 1942], 5.00th=[ 3785], 10.00th=[ 4752], 20.00th=[ 5669], 00:10:56.906 | 30.00th=[ 7308], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[10421], 00:10:56.906 | 70.00th=[15270], 80.00th=[21890], 90.00th=[28443], 95.00th=[33817], 00:10:56.906 | 99.00th=[44303], 99.50th=[44303], 99.90th=[46400], 99.95th=[49546], 00:10:56.906 | 99.99th=[54264] 00:10:56.906 bw ( KiB/s): min=15952, max=24056, per=20.82%, avg=20004.00, stdev=5730.39, samples=2 00:10:56.906 iops : min= 3988, max= 6014, avg=5001.00, stdev=1432.60, samples=2 00:10:56.906 lat (usec) : 1000=0.06% 00:10:56.906 lat (msec) : 2=0.85%, 4=5.02%, 10=57.34%, 20=17.65%, 50=17.61% 00:10:56.906 lat (msec) : 100=1.46% 00:10:56.906 cpu : usr=2.98%, sys=3.97%, ctx=434, majf=0, minf=1 00:10:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.906 issued rwts: total=4617,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.906 job2: (groupid=0, jobs=1): err= 0: pid=1143403: Wed Nov 20 16:05:32 2024 00:10:56.906 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:10:56.906 slat (nsec): min=932, max=12916k, avg=84178.68, stdev=565809.94 00:10:56.906 clat (usec): min=5141, max=32728, avg=10621.47, stdev=3556.84 00:10:56.906 lat (usec): min=5146, max=32772, avg=10705.65, stdev=3608.12 00:10:56.906 clat percentiles (usec): 00:10:56.906 | 1.00th=[ 6063], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8291], 00:10:56.906 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9896], 00:10:56.906 | 70.00th=[11207], 80.00th=[12518], 90.00th=[15533], 95.00th=[18220], 00:10:56.906 | 99.00th=[23987], 99.50th=[27132], 99.90th=[27395], 99.95th=[27395], 00:10:56.906 | 99.99th=[32637] 00:10:56.906 write: IOPS=5919, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1004msec); 0 zone resets 00:10:56.906 slat (nsec): min=1571, max=12504k, avg=83669.46, stdev=506470.73 00:10:56.906 clat (usec): min=3347, max=30863, avg=11274.97, stdev=5261.91 00:10:56.906 lat (usec): min=3791, max=30871, avg=11358.64, stdev=5309.77 00:10:56.906 clat percentiles (usec): 00:10:56.906 | 1.00th=[ 5014], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[ 8029], 00:10:56.906 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:10:56.906 | 70.00th=[10945], 80.00th=[15795], 90.00th=[20055], 95.00th=[22414], 00:10:56.906 | 99.00th=[26608], 99.50th=[27919], 99.90th=[30802], 99.95th=[30802], 00:10:56.906 | 99.99th=[30802] 00:10:56.906 bw ( KiB/s): min=16432, max=30096, per=24.21%, avg=23264.00, stdev=9661.91, samples=2 00:10:56.906 iops : min= 4108, max= 7524, avg=5816.00, stdev=2415.48, samples=2 00:10:56.906 lat (msec) : 4=0.08%, 10=65.56%, 20=27.93%, 50=6.43% 00:10:56.906 cpu : usr=3.79%, sys=5.88%, ctx=545, majf=0, minf=1 00:10:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.906 issued rwts: total=5632,5943,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:56.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.906 job3: (groupid=0, jobs=1): err= 0: pid=1143404: Wed Nov 20 16:05:32 2024 00:10:56.906 read: IOPS=5811, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1008msec) 00:10:56.906 slat (nsec): min=925, max=17369k, avg=77891.84, stdev=622974.66 00:10:56.906 clat (usec): min=2402, max=40956, avg=10635.64, stdev=4993.57 00:10:56.906 lat (usec): min=2839, max=40980, avg=10713.54, stdev=5042.48 00:10:56.906 clat percentiles (usec): 00:10:56.906 | 1.00th=[ 4686], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7570], 00:10:56.906 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:10:56.906 | 70.00th=[10552], 80.00th=[12256], 90.00th=[17171], 95.00th=[21627], 00:10:56.906 | 99.00th=[33424], 99.50th=[33424], 99.90th=[36439], 99.95th=[36439], 00:10:56.906 | 99.99th=[41157] 00:10:56.906 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:10:56.906 slat (nsec): min=1613, max=8781.6k, avg=81039.41, stdev=512707.90 00:10:56.906 clat (usec): min=806, max=68152, avg=10666.17, stdev=9008.05 00:10:56.906 lat (usec): min=1029, max=68161, avg=10747.21, stdev=9072.03 00:10:56.906 clat percentiles (usec): 00:10:56.906 | 1.00th=[ 4228], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6259], 00:10:56.906 | 30.00th=[ 7504], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9241], 00:10:56.906 | 70.00th=[ 9634], 80.00th=[11863], 90.00th=[15008], 95.00th=[18220], 00:10:56.906 | 99.00th=[61604], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:10:56.906 | 99.99th=[67634] 00:10:56.906 bw ( KiB/s): min=24392, max=24760, per=25.58%, avg=24576.00, stdev=260.22, samples=2 00:10:56.906 iops : min= 6098, max= 6190, avg=6144.00, stdev=65.05, samples=2 00:10:56.906 lat (usec) : 1000=0.01% 00:10:56.906 lat (msec) : 2=0.10%, 4=0.53%, 10=66.86%, 20=27.06%, 50=4.32% 00:10:56.906 lat (msec) : 100=1.12% 00:10:56.906 cpu : usr=3.38%, sys=7.15%, ctx=397, majf=0, minf=2 00:10:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.906 issued rwts: total=5858,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.906 00:10:56.906 Run status group 0 (all jobs): 00:10:56.906 READ: bw=88.2MiB/s (92.5MB/s), 17.9MiB/s-25.9MiB/s (18.8MB/s-27.2MB/s), io=88.9MiB (93.2MB), run=1004-1008msec 00:10:56.906 WRITE: bw=93.8MiB/s (98.4MB/s), 19.8MiB/s-27.3MiB/s (20.8MB/s-28.6MB/s), io=94.6MiB (99.2MB), run=1004-1008msec 00:10:56.906 00:10:56.906 Disk stats (read/write): 00:10:56.906 nvme0n1: ios=5428/5632, merge=0/0, ticks=28788/30162, in_queue=58950, util=87.37% 00:10:56.906 nvme0n2: ios=4168/4608, merge=0/0, ticks=28825/28834, in_queue=57659, util=92.35% 00:10:56.906 nvme0n3: ios=4501/4608, merge=0/0, ticks=24569/25728, in_queue=50297, util=88.38% 00:10:56.906 nvme0n4: ios=4650/5111, merge=0/0, ticks=33077/36649, in_queue=69726, util=94.98% 00:10:56.907 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:56.907 [global] 00:10:56.907 thread=1 00:10:56.907 invalidate=1 00:10:56.907 rw=randwrite 00:10:56.907 time_based=1 00:10:56.907 runtime=1 00:10:56.907 ioengine=libaio 00:10:56.907 direct=1 00:10:56.907 bs=4096 00:10:56.907 
iodepth=128 00:10:56.907 norandommap=0 00:10:56.907 numjobs=1 00:10:56.907 00:10:56.907 verify_dump=1 00:10:56.907 verify_backlog=512 00:10:56.907 verify_state_save=0 00:10:56.907 do_verify=1 00:10:56.907 verify=crc32c-intel 00:10:56.907 [job0] 00:10:56.907 filename=/dev/nvme0n1 00:10:56.907 [job1] 00:10:56.907 filename=/dev/nvme0n2 00:10:56.907 [job2] 00:10:56.907 filename=/dev/nvme0n3 00:10:56.907 [job3] 00:10:56.907 filename=/dev/nvme0n4 00:10:56.907 Could not set queue depth (nvme0n1) 00:10:56.907 Could not set queue depth (nvme0n2) 00:10:56.907 Could not set queue depth (nvme0n3) 00:10:56.907 Could not set queue depth (nvme0n4) 00:10:57.166 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.167 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.167 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.167 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.167 fio-3.35 00:10:57.167 Starting 4 threads 00:10:58.583 00:10:58.583 job0: (groupid=0, jobs=1): err= 0: pid=1143928: Wed Nov 20 16:05:34 2024 00:10:58.583 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:10:58.583 slat (nsec): min=867, max=12260k, avg=71731.66, stdev=507993.46 00:10:58.583 clat (usec): min=2685, max=28820, avg=9017.20, stdev=2697.19 00:10:58.583 lat (usec): min=2691, max=28828, avg=9088.93, stdev=2737.62 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 6915], 20.00th=[ 7373], 00:10:58.584 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:10:58.584 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[11207], 95.00th=[13829], 00:10:58.584 | 99.00th=[19530], 99.50th=[23725], 99.90th=[28705], 99.95th=[28705], 00:10:58.584 | 99.99th=[28705] 00:10:58.584 write: IOPS=7216, BW=28.2MiB/s (29.6MB/s)(28.3MiB/1003msec); 0 zone resets 00:10:58.584 slat (nsec): min=1488, max=6260.7k, avg=63154.38, stdev=289704.00 00:10:58.584 clat (usec): min=1127, max=25640, avg=8657.08, stdev=3147.50 00:10:58.584 lat (usec): min=1136, max=25649, avg=8720.23, stdev=3169.27 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 2409], 5.00th=[ 4883], 10.00th=[ 6587], 20.00th=[ 7242], 00:10:58.584 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8291], 00:10:58.584 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[11863], 95.00th=[16057], 00:10:58.584 | 99.00th=[20579], 99.50th=[21627], 99.90th=[23725], 99.95th=[24511], 00:10:58.584 | 99.99th=[25560] 00:10:58.584 bw ( KiB/s): min=28672, max=28672, per=27.33%, avg=28672.00, stdev= 0.00, samples=2 00:10:58.584 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:10:58.584 lat (msec) : 2=0.27%, 4=1.56%, 10=83.40%, 20=13.65%, 50=1.11% 00:10:58.584 cpu : usr=3.29%, sys=6.09%, ctx=954, majf=0, minf=1 00:10:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.584 issued rwts: total=7168,7238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.584 job1: (groupid=0, jobs=1): err= 0: pid=1143929: Wed Nov 20 16:05:34 2024 00:10:58.584 read: IOPS=7619, BW=29.8MiB/s 
(31.2MB/s)(30.0MiB/1008msec) 00:10:58.584 slat (nsec): min=934, max=8716.6k, avg=70162.51, stdev=541653.20 00:10:58.584 clat (usec): min=2547, max=21008, avg=8812.40, stdev=2209.50 00:10:58.584 lat (usec): min=2552, max=21033, avg=8882.56, stdev=2251.15 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 3654], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 7177], 00:10:58.584 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8586], 00:10:58.584 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[12125], 95.00th=[13304], 00:10:58.584 | 99.00th=[15270], 99.50th=[16909], 99.90th=[16909], 99.95th=[17171], 00:10:58.584 | 99.99th=[21103] 00:10:58.584 write: IOPS=7823, BW=30.6MiB/s (32.0MB/s)(30.8MiB/1008msec); 0 zone resets 00:10:58.584 slat (nsec): min=1549, max=6765.9k, avg=53713.37, stdev=305180.35 00:10:58.584 clat (usec): min=1598, max=25557, avg=7632.05, stdev=2543.78 00:10:58.584 lat (usec): min=1607, max=25564, avg=7685.76, stdev=2562.84 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 2442], 5.00th=[ 3916], 10.00th=[ 4752], 20.00th=[ 6587], 00:10:58.584 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8029], 00:10:58.584 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 9634], 00:10:58.584 | 99.00th=[18482], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:10:58.584 | 99.99th=[25560] 00:10:58.584 bw ( KiB/s): min=28296, max=33776, per=29.58%, avg=31036.00, stdev=3874.95, samples=2 00:10:58.584 iops : min= 7074, max= 8444, avg=7759.00, stdev=968.74, samples=2 00:10:58.584 lat (msec) : 2=0.22%, 4=2.94%, 10=82.60%, 20=13.74%, 50=0.51% 00:10:58.584 cpu : usr=5.16%, sys=7.15%, ctx=844, majf=0, minf=1 00:10:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.584 issued rwts: total=7680,7886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.584 job2: (groupid=0, jobs=1): err= 0: pid=1143930: Wed Nov 20 16:05:34 2024 00:10:58.584 read: IOPS=5967, BW=23.3MiB/s (24.4MB/s)(23.4MiB/1005msec) 00:10:58.584 slat (nsec): min=919, max=14078k, avg=86279.24, stdev=574202.57 00:10:58.584 clat (usec): min=978, max=29061, avg=10736.33, stdev=3129.25 00:10:58.584 lat (usec): min=3231, max=29067, avg=10822.60, stdev=3161.26 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 4293], 5.00th=[ 7046], 10.00th=[ 8455], 20.00th=[ 9110], 00:10:58.584 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:10:58.584 | 70.00th=[10421], 80.00th=[11338], 90.00th=[15664], 95.00th=[17171], 00:10:58.584 | 99.00th=[20317], 99.50th=[25822], 99.90th=[28967], 99.95th=[28967], 00:10:58.584 | 99.99th=[28967] 00:10:58.584 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:10:58.584 slat (nsec): min=1531, max=11766k, avg=74388.40, stdev=456729.87 00:10:58.584 clat (usec): min=1208, max=25721, avg=10256.73, stdev=3027.52 00:10:58.584 lat (usec): min=1219, max=25731, avg=10331.11, stdev=3041.07 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 5211], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8717], 00:10:58.584 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:10:58.584 | 70.00th=[10159], 80.00th=[11338], 90.00th=[12649], 95.00th=[17171], 00:10:58.584 | 99.00th=[24249], 99.50th=[24511], 99.90th=[24511], 
99.95th=[24511], 00:10:58.584 | 99.99th=[25822] 00:10:58.584 bw ( KiB/s): min=24560, max=24592, per=23.42%, avg=24576.00, stdev=22.63, samples=2 00:10:58.584 iops : min= 6140, max= 6148, avg=6144.00, stdev= 5.66, samples=2 00:10:58.584 lat (usec) : 1000=0.01% 00:10:58.584 lat (msec) : 2=0.07%, 4=0.11%, 10=55.80%, 20=41.55%, 50=2.45% 00:10:58.584 cpu : usr=3.09%, sys=4.48%, ctx=689, majf=0, minf=1 00:10:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.584 issued rwts: total=5997,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.584 job3: (groupid=0, jobs=1): err= 0: pid=1143931: Wed Nov 20 16:05:34 2024 00:10:58.584 read: IOPS=5571, BW=21.8MiB/s (22.8MB/s)(22.7MiB/1045msec) 00:10:58.584 slat (nsec): min=912, max=11168k, avg=79496.47, stdev=650144.18 00:10:58.584 clat (usec): min=2502, max=52920, avg=11041.67, stdev=6502.80 00:10:58.584 lat (usec): min=2531, max=57759, avg=11121.17, stdev=6533.38 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 3785], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8356], 00:10:58.584 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:10:58.584 | 70.00th=[10683], 80.00th=[11731], 90.00th=[14746], 95.00th=[16188], 00:10:58.584 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:10:58.584 | 99.99th=[52691] 00:10:58.584 write: IOPS=5879, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1045msec); 0 zone resets 00:10:58.584 slat (nsec): min=1616, max=13048k, avg=77915.63, stdev=596769.31 00:10:58.584 clat (usec): min=1196, max=70682, avg=11122.97, stdev=9264.23 00:10:58.584 lat (usec): min=1206, max=70691, avg=11200.89, stdev=9328.11 00:10:58.584 clat percentiles (usec): 00:10:58.584 | 1.00th=[ 2966], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6783], 00:10:58.584 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:10:58.584 | 70.00th=[10159], 80.00th=[12780], 90.00th=[15664], 95.00th=[22676], 00:10:58.584 | 99.00th=[63177], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:10:58.584 | 99.99th=[70779] 00:10:58.584 bw ( KiB/s): min=22072, max=27080, per=23.42%, avg=24576.00, stdev=3541.19, samples=2 00:10:58.584 iops : min= 5518, max= 6770, avg=6144.00, stdev=885.30, samples=2 00:10:58.584 lat (msec) : 2=0.12%, 4=1.86%, 10=63.98%, 20=28.72%, 50=3.73% 00:10:58.584 lat (msec) : 100=1.59% 00:10:58.584 cpu : usr=4.12%, sys=6.13%, ctx=376, majf=0, minf=1 00:10:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.584 issued rwts: total=5822,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.584 00:10:58.584 Run status group 0 (all jobs): 00:10:58.584 READ: bw=99.7MiB/s (105MB/s), 21.8MiB/s-29.8MiB/s (22.8MB/s-31.2MB/s), io=104MiB (109MB), run=1003-1045msec 00:10:58.584 WRITE: bw=102MiB/s (107MB/s), 23.0MiB/s-30.6MiB/s (24.1MB/s-32.0MB/s), io=107MiB (112MB), run=1003-1045msec 00:10:58.584 00:10:58.584 Disk stats (read/write): 00:10:58.584 nvme0n1: ios=5807/6144, merge=0/0, ticks=33771/33173, in_queue=66944, util=87.88% 00:10:58.584 nvme0n2: ios=6693/6671, 
merge=0/0, ticks=53909/46164, in_queue=100073, util=92.46%
00:10:58.584 nvme0n3: ios=4776/5120, merge=0/0, ticks=28650/27522, in_queue=56172, util=87.66%
00:10:58.584 nvme0n4: ios=4709/5120, merge=0/0, ticks=45419/53021, in_queue=98440, util=91.56%
00:10:58.584 16:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:10:58.584 16:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1144259
00:10:58.584 16:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:10:58.584 16:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:10:58.584 [global]
00:10:58.584 thread=1
00:10:58.584 invalidate=1
00:10:58.584 rw=read
00:10:58.584 time_based=1
00:10:58.584 runtime=10
00:10:58.584 ioengine=libaio
00:10:58.584 direct=1
00:10:58.584 bs=4096
00:10:58.584 iodepth=1
00:10:58.584 norandommap=1
00:10:58.584 numjobs=1
00:10:58.584
00:10:58.584 [job0]
00:10:58.584 filename=/dev/nvme0n1
00:10:58.584 [job1]
00:10:58.584 filename=/dev/nvme0n2
00:10:58.584 [job2]
00:10:58.584 filename=/dev/nvme0n3
00:10:58.584 [job3]
00:10:58.584 filename=/dev/nvme0n4
00:10:58.584 Could not set queue depth (nvme0n1)
00:10:58.584 Could not set queue depth (nvme0n2)
00:10:58.584 Could not set queue depth (nvme0n3)
00:10:58.584 Could not set queue depth (nvme0n4)
00:10:58.853 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:58.853 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:58.853 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:58.853 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:58.853 fio-3.35
00:10:58.853 Starting 4 threads
00:11:01.391 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:11:01.391 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13320192, buflen=4096
00:11:01.391 fio: pid=1144453, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:01.650 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:11:01.650 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:01.651 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:11:01.651 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=344064, buflen=4096
00:11:01.651 fio: pid=1144452, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:01.910 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11747328, buflen=4096
00:11:01.910 fio: pid=1144449, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:01.910 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:01.910 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:02.171 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16916480, buflen=4096 00:11:02.171 fio: pid=1144450, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.171 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.171 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:02.171 00:11:02.171 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1144449: Wed Nov 20 16:05:37 2024 00:11:02.171 read: IOPS=972, BW=3887KiB/s (3981kB/s)(11.2MiB/2951msec) 00:11:02.171 slat (usec): min=6, max=11034, avg=39.39, stdev=351.37 00:11:02.171 clat (usec): min=454, max=9019, avg=975.61, stdev=200.86 00:11:02.171 lat (usec): min=462, max=12064, avg=1015.01, stdev=404.26 00:11:02.171 clat percentiles (usec): 00:11:02.171 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 848], 20.00th=[ 898], 00:11:02.171 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:11:02.171 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:02.171 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 2507], 99.95th=[ 5932], 00:11:02.171 | 99.99th=[ 8979] 00:11:02.171 bw ( KiB/s): min= 3944, max= 4048, per=30.41%, avg=4000.00, stdev=37.95, samples=5 00:11:02.171 iops : min= 986, max= 1012, avg=1000.00, stdev= 9.49, samples=5 00:11:02.171 lat (usec) : 500=0.03%, 750=1.88%, 1000=57.37% 00:11:02.171 lat (msec) : 2=40.57%, 4=0.03%, 10=0.07% 00:11:02.171 cpu : usr=1.53%, sys=4.17%, ctx=2874, majf=0, minf=1 00:11:02.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.171 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.171 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.171 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1144450: Wed Nov 20 16:05:37 2024 00:11:02.171 read: IOPS=1314, BW=5256KiB/s (5382kB/s)(16.1MiB/3143msec) 00:11:02.171 slat (usec): min=5, max=36745, avg=43.24, stdev=660.30 00:11:02.171 clat (usec): min=267, max=1326, avg=706.83, stdev=107.38 00:11:02.171 lat (usec): min=292, max=37294, avg=750.07, stdev=667.45 00:11:02.171 clat percentiles (usec): 00:11:02.171 | 1.00th=[ 392], 5.00th=[ 523], 10.00th=[ 570], 20.00th=[ 627], 00:11:02.171 | 30.00th=[ 660], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 742], 00:11:02.171 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 832], 95.00th=[ 865], 00:11:02.171 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 988], 99.95th=[ 1057], 00:11:02.171 | 99.99th=[ 1319] 00:11:02.171 bw ( KiB/s): min= 4726, max= 5480, per=40.41%, avg=5314.33, stdev=290.48, samples=6 00:11:02.171 iops : min= 1181, max= 1370, avg=1328.50, stdev=72.82, samples=6 00:11:02.171 lat (usec) : 500=3.70%, 750=59.57%, 1000=36.60% 00:11:02.171 lat (msec) : 2=0.10% 00:11:02.171 cpu : usr=1.27%, sys=3.69%, ctx=4137, majf=0, minf=2 00:11:02.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:11:02.171 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.171 issued rwts: total=4131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.171 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1144452: Wed Nov 20 16:05:37 2024 00:11:02.171 read: IOPS=30, BW=120KiB/s (123kB/s)(336KiB/2797msec) 00:11:02.171 slat (nsec): min=7328, max=39072, avg=26210.36, stdev=3109.24 00:11:02.171 clat (usec): min=522, max=42251, avg=33009.81, stdev=16691.90 00:11:02.171 lat (usec): min=547, max=42277, avg=33036.02, stdev=16692.21 00:11:02.171 clat percentiles (usec): 00:11:02.171 | 1.00th=[ 523], 5.00th=[ 709], 10.00th=[ 799], 20.00th=[ 1012], 00:11:02.171 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:11:02.171 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:02.171 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:02.171 | 99.99th=[42206] 00:11:02.171 bw ( KiB/s): min= 96, max= 224, per=0.94%, avg=123.20, stdev=56.46, samples=5 00:11:02.171 iops : min= 24, max= 56, avg=30.80, stdev=14.11, samples=5 00:11:02.171 lat (usec) : 750=8.24%, 1000=10.59% 00:11:02.171 lat (msec) : 2=1.18%, 20=1.18%, 50=77.65% 00:11:02.171 cpu : usr=0.14%, sys=0.00%, ctx=85, majf=0, minf=2 00:11:02.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.171 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.171 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.171 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1144453: Wed Nov 20 16:05:37 2024 00:11:02.171 read: IOPS=1266, BW=5065KiB/s (5187kB/s)(12.7MiB/2568msec) 00:11:02.171 slat (nsec): min=6917, max=60587, avg=23569.93, stdev=7835.87 00:11:02.171 clat (usec): min=368, max=1074, avg=757.89, stdev=84.07 00:11:02.171 lat (usec): min=394, max=1101, avg=781.46, stdev=85.56 00:11:02.171 clat percentiles (usec): 00:11:02.171 | 1.00th=[ 502], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 693], 00:11:02.171 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 766], 60.00th=[ 791], 00:11:02.171 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 873], 00:11:02.171 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 971], 99.95th=[ 988], 00:11:02.171 | 99.99th=[ 1074] 00:11:02.171 bw ( KiB/s): min= 5032, max= 5168, per=38.72%, avg=5092.80, stdev=49.51, samples=5 00:11:02.171 iops : min= 1258, max= 1292, avg=1273.20, stdev=12.38, samples=5 00:11:02.171 lat (usec) : 500=0.95%, 750=39.90%, 1000=59.08% 00:11:02.171 lat (msec) : 2=0.03% 00:11:02.171 cpu : usr=1.29%, sys=3.39%, ctx=3253, majf=0, minf=2 00:11:02.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.171 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.171 issued rwts: total=3253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.171 00:11:02.171 Run status group 0 (all jobs): 00:11:02.171 READ: bw=12.8MiB/s (13.5MB/s), 120KiB/s-5256KiB/s (123kB/s-5382kB/s), io=40.4MiB (42.3MB), 
run=2568-3143msec 00:11:02.171 00:11:02.171 Disk stats (read/write): 00:11:02.171 nvme0n1: ios=2810/0, merge=0/0, ticks=2629/0, in_queue=2629, util=94.36% 00:11:02.171 nvme0n2: ios=4085/0, merge=0/0, ticks=2824/0, in_queue=2824, util=93.28% 00:11:02.171 nvme0n3: ios=79/0, merge=0/0, ticks=2569/0, in_queue=2569, util=96.03% 00:11:02.171 nvme0n4: ios=2979/0, merge=0/0, ticks=2186/0, in_queue=2186, util=96.10% 00:11:02.171 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.171 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:02.431 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.431 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:02.690 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.690 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:02.951 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.951 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:02.951 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:02.951 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1144259 00:11:02.951 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:02.951 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:03.211 nvmf hotplug test: fio failed as expected 00:11:03.211 16:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.211 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.211 rmmod nvme_tcp 00:11:03.489 rmmod nvme_fabrics 00:11:03.489 rmmod nvme_keyring 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1140615 ']' 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1140615 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1140615 ']' 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1140615 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1140615 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1140615' 00:11:03.489 killing process with pid 1140615 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1140615 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1140615 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.489 16:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.110 00:11:06.110 real 0m29.467s 00:11:06.110 user 2m43.692s 00:11:06.110 sys 0m9.887s 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.110 ************************************ 00:11:06.110 END TEST nvmf_fio_target 00:11:06.110 ************************************ 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.110 ************************************ 00:11:06.110 START TEST nvmf_bdevio 00:11:06.110 ************************************ 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:06.110 * Looking for test storage... 
00:11:06.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.110 --rc genhtml_branch_coverage=1 00:11:06.110 --rc genhtml_function_coverage=1 00:11:06.110 --rc genhtml_legend=1 00:11:06.110 --rc geninfo_all_blocks=1 00:11:06.110 --rc geninfo_unexecuted_blocks=1 00:11:06.110 00:11:06.110 ' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.110 --rc genhtml_branch_coverage=1 00:11:06.110 --rc genhtml_function_coverage=1 00:11:06.110 --rc genhtml_legend=1 00:11:06.110 --rc geninfo_all_blocks=1 00:11:06.110 --rc geninfo_unexecuted_blocks=1 00:11:06.110 00:11:06.110 ' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.110 --rc genhtml_branch_coverage=1 00:11:06.110 --rc genhtml_function_coverage=1 00:11:06.110 --rc genhtml_legend=1 00:11:06.110 --rc geninfo_all_blocks=1 00:11:06.110 --rc geninfo_unexecuted_blocks=1 00:11:06.110 00:11:06.110 ' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.110 --rc genhtml_branch_coverage=1 00:11:06.110 --rc genhtml_function_coverage=1 00:11:06.110 --rc genhtml_legend=1 00:11:06.110 --rc geninfo_all_blocks=1 00:11:06.110 --rc geninfo_unexecuted_blocks=1 00:11:06.110 00:11:06.110 ' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.110 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.111 16:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.254 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:14.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:14.255 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.255 16:05:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:14.255 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:14.255 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.255 
16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:14.255 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:14.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:14.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms
00:11:14.255
00:11:14.255 --- 10.0.0.2 ping statistics ---
00:11:14.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:14.255 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:14.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:14.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms
00:11:14.255
00:11:14.255 --- 10.0.0.1 ping statistics ---
00:11:14.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:14.255 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:14.255 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1149566
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1149566
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1149566 ']'
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:14.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:14.256 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:14.256 [2024-11-20 16:05:49.336186] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization...
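The nvmf_tcp_init trace above reduces to a small namespace split: the target-side port is moved into its own network namespace, each side gets one half of a /24, and reachability is verified in both directions before the target starts. A condensed sketch of the same sequence, with the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses taken from this particular rig's e810 ports:

    # Target port into its own netns; the initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator side addressed in the root namespace, target side inside the netns.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

NVMF_APP is then prefixed with the netns command, which is why the nvmf_tgt launch above runs under ip netns exec cvl_0_0_ns_spdk.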
00:11:14.256 [2024-11-20 16:05:49.336256] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.256 [2024-11-20 16:05:49.438531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.256 [2024-11-20 16:05:49.491838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.256 [2024-11-20 16:05:49.491897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.256 [2024-11-20 16:05:49.491907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.256 [2024-11-20 16:05:49.491914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.256 [2024-11-20 16:05:49.491920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.256 [2024-11-20 16:05:49.494008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:14.256 [2024-11-20 16:05:49.494194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:14.256 [2024-11-20 16:05:49.494304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:14.256 [2024-11-20 16:05:49.494484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.256 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.256 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:14.256 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.256 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.256 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 [2024-11-20 16:05:50.227145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 Malloc0 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.514 16:05:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 [2024-11-20 16:05:50.303102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:14.514 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:14.515 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:14.515 { 00:11:14.515 "params": { 00:11:14.515 "name": "Nvme$subsystem", 00:11:14.515 "trtype": "$TEST_TRANSPORT", 00:11:14.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.515 "adrfam": "ipv4", 00:11:14.515 "trsvcid": "$NVMF_PORT", 00:11:14.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.515 "hdgst": ${hdgst:-false}, 00:11:14.515 "ddgst": ${ddgst:-false} 00:11:14.515 }, 00:11:14.515 "method": "bdev_nvme_attach_controller" 00:11:14.515 } 00:11:14.515 EOF 00:11:14.515 )") 00:11:14.515 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:14.515 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:14.515 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:14.515 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:14.515 "params": { 00:11:14.515 "name": "Nvme1", 00:11:14.515 "trtype": "tcp", 00:11:14.515 "traddr": "10.0.0.2", 00:11:14.515 "adrfam": "ipv4", 00:11:14.515 "trsvcid": "4420", 00:11:14.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.515 "hdgst": false, 00:11:14.515 "ddgst": false 00:11:14.515 }, 00:11:14.515 "method": "bdev_nvme_attach_controller" 00:11:14.515 }' 00:11:14.515 [2024-11-20 16:05:50.361241] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
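Stripped of the rpc_cmd/xtrace wrappers above, provisioning the bdevio target comes down to five rpc.py calls. A minimal sketch using the same names, sizes, and flags as this run (64 MiB malloc bdev, 512-byte blocks, per bdevio.sh's MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # transport flags exactly as bdevio.sh passes them
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then reads the gen_nvmf_target_json output (the resolved bdev_nvme_attach_controller JSON printed above) from /dev/fd/62 and attaches the subsystem as Nvme1n1 for the CUnit tests that follow.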
00:11:14.515 [2024-11-20 16:05:50.361309] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149853 ] 00:11:14.772 [2024-11-20 16:05:50.453850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.772 [2024-11-20 16:05:50.510251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.772 [2024-11-20 16:05:50.510392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.772 [2024-11-20 16:05:50.510394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.030 I/O targets: 00:11:15.030 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:15.030 00:11:15.030 00:11:15.030 CUnit - A unit testing framework for C - Version 2.1-3 00:11:15.030 http://cunit.sourceforge.net/ 00:11:15.030 00:11:15.030 00:11:15.030 Suite: bdevio tests on: Nvme1n1 00:11:15.030 Test: blockdev write read block ...passed 00:11:15.030 Test: blockdev write zeroes read block ...passed 00:11:15.030 Test: blockdev write zeroes read no split ...passed 00:11:15.030 Test: blockdev write zeroes read split ...passed 00:11:15.030 Test: blockdev write zeroes read split partial ...passed 00:11:15.030 Test: blockdev reset ...[2024-11-20 16:05:50.852347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:15.030 [2024-11-20 16:05:50.852443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f1970 (9): Bad file descriptor 00:11:15.030 [2024-11-20 16:05:50.883251] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:15.030 passed 00:11:15.030 Test: blockdev write read 8 blocks ...passed 00:11:15.030 Test: blockdev write read size > 128k ...passed 00:11:15.030 Test: blockdev write read invalid size ...passed 00:11:15.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.030 Test: blockdev write read max offset ...passed 00:11:15.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.288 Test: blockdev writev readv 8 blocks ...passed 00:11:15.288 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.288 Test: blockdev writev readv block ...passed 00:11:15.288 Test: blockdev writev readv size > 128k ...passed 00:11:15.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.288 Test: blockdev comparev and writev ...[2024-11-20 16:05:51.067857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.067908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.067925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.067934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.068475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.068488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.068503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.068511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.069050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.069062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.069076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.069084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.069660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.069673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.069687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.288 [2024-11-20 16:05:51.069695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:15.288 passed 00:11:15.288 Test: blockdev nvme passthru rw ...passed 00:11:15.288 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:05:51.154081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.288 [2024-11-20 16:05:51.154098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.154466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.288 [2024-11-20 16:05:51.154478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.154884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.288 [2024-11-20 16:05:51.154895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:15.288 [2024-11-20 16:05:51.155176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.288 [2024-11-20 16:05:51.155194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:15.288 passed 00:11:15.288 Test: blockdev nvme admin passthru ...passed 00:11:15.288 Test: blockdev copy ...passed 00:11:15.288 00:11:15.288 Run Summary: Type Total Ran Passed Failed Inactive 00:11:15.288 suites 1 1 n/a 0 0 00:11:15.288 tests 23 23 23 0 0 00:11:15.288 asserts 152 152 152 0 n/a 00:11:15.288 00:11:15.288 Elapsed time = 1.005 seconds 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.547 rmmod nvme_tcp 00:11:15.547 rmmod nvme_fabrics 00:11:15.547 rmmod nvme_keyring 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
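nvmfcleanup above runs its module unload inside a set +e window because nvme-tcp can still be busy briefly after the controller detaches; the -v flag is what produces the rmmod lines in the trace. A condensed sketch of the traced pattern (the retry delay is an assumption; the trace shows only the {1..20} loop and the modprobe calls):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # -r also pulls out the nvme_fabrics/nvme_keyring deps
        sleep 0.5                          # backoff between attempts (assumed, not in the trace)
    done
    modprobe -v -r nvme-fabrics            # no-op if the dep chain already unloaded
    set -e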
00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1149566 ']' 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1149566 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1149566 ']' 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1149566 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.547 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149566 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149566' 00:11:15.807 killing process with pid 1149566 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1149566 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1149566 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.807 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.356 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.356 00:11:18.356 real 0m12.210s 00:11:18.356 user 0m12.844s 00:11:18.356 sys 0m6.278s 00:11:18.356 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.356 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.356 ************************************ 00:11:18.356 END TEST nvmf_bdevio 00:11:18.356 ************************************ 00:11:18.356 16:05:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:18.356 00:11:18.356 real 5m3.153s 00:11:18.356 user 11m51.867s 00:11:18.356 sys 1m51.915s 
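Two helpers close out nvmf_bdevio above: killprocess, which checks the pid is alive and refuses to reap anything whose comm is sudo before killing and waiting on it, and iptr, which restores the firewall by filtering out every rule the harness tagged with an SPDK_NVMF comment. A standalone sketch of both (the pid is the one from this run, purely illustrative):

    pid=1149853
    if kill -0 "$pid" && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # wait only reaps our own children, as the target is here
    fi
    # iptr: keep every rule except the SPDK_NVMF-tagged ones the tests inserted
    iptables-save | grep -v SPDK_NVMF | iptables-restore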
00:11:18.356 16:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.356 16:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.356 ************************************ 00:11:18.356 END TEST nvmf_target_core 00:11:18.356 ************************************ 00:11:18.356 16:05:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:18.356 16:05:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.356 16:05:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.356 16:05:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.356 ************************************ 00:11:18.356 START TEST nvmf_target_extra 00:11:18.356 ************************************ 00:11:18.356 16:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:18.356 * Looking for test storage... 00:11:18.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:18.357 16:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.357 16:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.357 16:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.357 --rc genhtml_branch_coverage=1 00:11:18.357 --rc genhtml_function_coverage=1 00:11:18.357 --rc genhtml_legend=1 00:11:18.357 --rc geninfo_all_blocks=1 00:11:18.357 --rc geninfo_unexecuted_blocks=1 00:11:18.357 00:11:18.357 ' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.357 --rc genhtml_branch_coverage=1 00:11:18.357 --rc genhtml_function_coverage=1 00:11:18.357 --rc genhtml_legend=1 00:11:18.357 --rc geninfo_all_blocks=1 00:11:18.357 --rc geninfo_unexecuted_blocks=1 00:11:18.357 00:11:18.357 ' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.357 --rc genhtml_branch_coverage=1 00:11:18.357 --rc genhtml_function_coverage=1 00:11:18.357 --rc genhtml_legend=1 00:11:18.357 --rc geninfo_all_blocks=1 00:11:18.357 --rc geninfo_unexecuted_blocks=1 00:11:18.357 00:11:18.357 ' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.357 --rc genhtml_branch_coverage=1 00:11:18.357 --rc genhtml_function_coverage=1 00:11:18.357 --rc genhtml_legend=1 00:11:18.357 --rc geninfo_all_blocks=1 00:11:18.357 --rc geninfo_unexecuted_blocks=1 00:11:18.357 00:11:18.357 ' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
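The lt/cmp_versions trace from scripts/common.sh above decides whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_* option spellings exported just after it. The mechanism is a plain field-wise numeric compare on .-:-separated version strings; a condensed sketch of the same logic:

    lt() {   # true (0) when $1 is strictly older than $2
        local IFS=.-: a b v n
        read -ra a <<< "$1"; read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # missing fields compare as 0
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1   # equal: not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use legacy --rc options"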
00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.357 ************************************ 00:11:18.357 START TEST nvmf_example 00:11:18.357 ************************************ 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:18.357 * Looking for test storage... 
00:11:18.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.357 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.358 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.620 --rc genhtml_branch_coverage=1 00:11:18.620 --rc genhtml_function_coverage=1 00:11:18.620 --rc genhtml_legend=1 00:11:18.620 --rc geninfo_all_blocks=1 00:11:18.620 --rc geninfo_unexecuted_blocks=1 00:11:18.620 00:11:18.620 ' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.620 --rc genhtml_branch_coverage=1 00:11:18.620 --rc genhtml_function_coverage=1 00:11:18.620 --rc genhtml_legend=1 00:11:18.620 --rc geninfo_all_blocks=1 00:11:18.620 --rc geninfo_unexecuted_blocks=1 00:11:18.620 00:11:18.620 ' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.620 --rc genhtml_branch_coverage=1 00:11:18.620 --rc genhtml_function_coverage=1 00:11:18.620 --rc genhtml_legend=1 00:11:18.620 --rc geninfo_all_blocks=1 00:11:18.620 --rc geninfo_unexecuted_blocks=1 00:11:18.620 00:11:18.620 ' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.620 --rc genhtml_branch_coverage=1 00:11:18.620 --rc genhtml_function_coverage=1 00:11:18.620 --rc genhtml_legend=1 00:11:18.620 --rc geninfo_all_blocks=1 00:11:18.620 --rc geninfo_unexecuted_blocks=1 00:11:18.620 00:11:18.620 ' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:18.620 16:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:18.620 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:18.620 16:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.621 16:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:26.771 16:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.771 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:26.772 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:26.772 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:26.772 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:26.772 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.772 16:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:26.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:11:26.772 00:11:26.772 --- 10.0.0.2 ping statistics --- 00:11:26.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.772 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:11:26.772 00:11:26.772 --- 10.0.0.1 ping statistics --- 00:11:26.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.772 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:26.772 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1154498 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1154498 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1154498 ']' 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.773 16:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.773 16:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:27.034 16:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:39.271 Initializing NVMe Controllers 00:11:39.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:39.271 Initialization complete. Launching workers. 00:11:39.271 ======================================================== 00:11:39.271 Latency(us) 00:11:39.271 Device Information : IOPS MiB/s Average min max 00:11:39.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18572.35 72.55 3445.63 627.41 16340.74 00:11:39.271 ======================================================== 00:11:39.271 Total : 18572.35 72.55 3445.63 627.41 16340.74 00:11:39.271 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.271 rmmod nvme_tcp 00:11:39.271 rmmod nvme_fabrics 00:11:39.271 rmmod nvme_keyring 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1154498 ']' 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1154498 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1154498 ']' 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1154498 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1154498 00:11:39.271 16:06:13 
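The spdk_nvme_perf invocation above is the actual I/O exercise: queue depth 64 (-q), 4096-byte I/Os (-o), random mixed read/write with a 30 percent read share (-w randrw -M 30), for 10 seconds (-t), against the transport ID given with -r. The result table, flattened here by the log capture, reads 18572.35 IOPS, 72.55 MiB/s, and average/min/max latency of 3445.63/627.41/16340.74 us; the throughput column checks out against the IOPS figure, since 18572.35 x 4096 B is about 72.55 MiB/s. Reproducing the measurement by hand:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'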
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1154498' 00:11:39.271 killing process with pid 1154498 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1154498 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1154498 00:11:39.271 nvmf threads initialize successfully 00:11:39.271 bdev subsystem init successfully 00:11:39.271 created a nvmf target service 00:11:39.271 create targets's poll groups done 00:11:39.271 all subsystems of target started 00:11:39.271 nvmf target is running 00:11:39.271 all subsystems of target stopped 00:11:39.271 destroy targets's poll groups done 00:11:39.271 destroyed the nvmf target service 00:11:39.271 bdev subsystem finish successfully 00:11:39.271 nvmf threads destroy successfully 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.271 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.531 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.531 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:39.531 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.531 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.793 00:11:39.793 real 0m21.355s 00:11:39.793 user 0m46.379s 00:11:39.793 sys 0m6.985s 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.793 ************************************ 00:11:39.793 END TEST nvmf_example 00:11:39.793 ************************************ 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
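Teardown above follows the harness idiom: the SIGINT/SIGTERM/EXIT trap installed by nvmfexamplestart is cleared once the body succeeds, nvmftestfini syncs and unloads nvme-tcp and nvme-fabrics (the rmmod lines), and killprocess confirms pid 1154498 is still alive with kill -0, inspects its comm name, kills it, and waits while the example app prints its shutdown sequence. A sketch of the idiom with an illustrative helper name; the real killprocess in autotest_common.sh additionally special-cases sudo-wrapped processes:

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    ps --no-headers -o comm= "$pid"               # the trace inspects the comm name here
    kill "$pid"
    wait "$pid"                                   # reap the child and let its stdout drain
}

trap 'killprocess_sketch "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT   # installed up front
# ... test body ...
trap - SIGINT SIGTERM EXIT                        # cleared on the success path, as traced above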
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.793 ************************************ 00:11:39.793 START TEST nvmf_filesystem 00:11:39.793 ************************************ 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:39.793 * Looking for test storage... 00:11:39.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.793 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
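run_test is the wrapper behind the START TEST / END TEST banners: it first checks it was handed a name plus a command (the '[' 3 -le 1 ']' arity guard above, false for three arguments), then times the command between banner blocks. A stripped-down sketch; the real run_test in autotest_common.sh also records per-test timing and shared-memory state:

run_test_sketch() {
    [ "$#" -le 1 ] && return 1        # same guard as '[' 3 -le 1 ']' in the trace
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}
run_test_sketch nvmf_filesystem ./filesystem.sh --transport=tcp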
ver1_l : ver2_l) )) 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.058 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:40.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.059 --rc genhtml_branch_coverage=1 00:11:40.059 --rc genhtml_function_coverage=1 00:11:40.059 --rc genhtml_legend=1 00:11:40.059 --rc geninfo_all_blocks=1 00:11:40.059 --rc geninfo_unexecuted_blocks=1 00:11:40.059 00:11:40.059 ' 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:40.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.059 --rc genhtml_branch_coverage=1 00:11:40.059 --rc genhtml_function_coverage=1 00:11:40.059 --rc genhtml_legend=1 00:11:40.059 --rc geninfo_all_blocks=1 00:11:40.059 --rc geninfo_unexecuted_blocks=1 00:11:40.059 00:11:40.059 ' 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:40.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.059 --rc genhtml_branch_coverage=1 00:11:40.059 --rc genhtml_function_coverage=1 00:11:40.059 --rc genhtml_legend=1 00:11:40.059 --rc geninfo_all_blocks=1 00:11:40.059 --rc geninfo_unexecuted_blocks=1 00:11:40.059 00:11:40.059 ' 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:40.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.059 --rc genhtml_branch_coverage=1 00:11:40.059 --rc genhtml_function_coverage=1 00:11:40.059 --rc genhtml_legend=1 00:11:40.059 --rc geninfo_all_blocks=1 00:11:40.059 --rc geninfo_unexecuted_blocks=1 00:11:40.059 00:11:40.059 ' 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:40.059 16:06:15 
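The scripts/common.sh trace above is a pure-bash dotted-version compare: both strings are split on '.', '-' and ':' (IFS=.-:), each component is validated as numeric by decimal, and components are compared pairwise, so lt 1.15 2 succeeds because 1 < 2 in the first position; the outcome selects which lcov option set gets exported below. A self-contained sketch of the same idea, assuming numeric components (the real cmp_versions also handles the other comparison operators):

version_lt() {
    local IFS=.-:                                  # same separators as the trace
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0  # missing components count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                       # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov is older than 2'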
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:40.059 
16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:40.059 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
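applications.sh, sourced above, resolves the repo root from its own location and publishes every launcher as a bash array (ISCSI_APP, NVMF_APP, SPDK_APP, and so on). The array form is what let nvmf/common.sh compose the namespace wrapper earlier in this log with NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}"): expanding "${NVMF_APP[@]}" preserves exact argv boundaries however many prefix words are added. In isolation:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # compose, as nvmf/common.sh does
"${NVMF_APP[@]}" -m 0xF &                               # runs: ip netns exec ... nvmf_tgt -m 0xF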
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:40.060 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:40.060 #define SPDK_CONFIG_H 00:11:40.060 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:40.060 #define SPDK_CONFIG_APPS 1 00:11:40.060 #define SPDK_CONFIG_ARCH native 00:11:40.060 #undef SPDK_CONFIG_ASAN 00:11:40.060 #undef SPDK_CONFIG_AVAHI 00:11:40.060 #undef SPDK_CONFIG_CET 00:11:40.060 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:40.060 #define SPDK_CONFIG_COVERAGE 1 00:11:40.060 #define SPDK_CONFIG_CROSS_PREFIX 00:11:40.060 #undef SPDK_CONFIG_CRYPTO 00:11:40.060 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:40.060 #undef SPDK_CONFIG_CUSTOMOCF 00:11:40.060 #undef SPDK_CONFIG_DAOS 00:11:40.060 #define SPDK_CONFIG_DAOS_DIR 00:11:40.060 #define SPDK_CONFIG_DEBUG 1 00:11:40.060 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:40.060 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:40.060 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:40.060 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:40.060 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:40.060 #undef SPDK_CONFIG_DPDK_UADK 00:11:40.060 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:40.060 #define SPDK_CONFIG_EXAMPLES 1 00:11:40.060 #undef SPDK_CONFIG_FC 00:11:40.060 #define SPDK_CONFIG_FC_PATH 00:11:40.060 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:40.060 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:40.060 #define SPDK_CONFIG_FSDEV 1 00:11:40.060 #undef SPDK_CONFIG_FUSE 00:11:40.060 #undef SPDK_CONFIG_FUZZER 00:11:40.060 #define SPDK_CONFIG_FUZZER_LIB 00:11:40.060 #undef SPDK_CONFIG_GOLANG 00:11:40.060 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:40.060 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:40.061 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:40.061 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:40.061 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:40.061 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:40.061 #undef SPDK_CONFIG_HAVE_LZ4 00:11:40.061 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:40.061 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:40.061 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:40.061 #define SPDK_CONFIG_IDXD 1 00:11:40.061 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:40.061 #undef SPDK_CONFIG_IPSEC_MB 00:11:40.061 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:40.061 #define SPDK_CONFIG_ISAL 1 00:11:40.061 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:40.061 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:40.061 #define SPDK_CONFIG_LIBDIR 00:11:40.061 #undef SPDK_CONFIG_LTO 00:11:40.061 #define SPDK_CONFIG_MAX_LCORES 128 00:11:40.061 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:40.061 #define SPDK_CONFIG_NVME_CUSE 1 00:11:40.061 #undef SPDK_CONFIG_OCF 00:11:40.061 #define SPDK_CONFIG_OCF_PATH 00:11:40.061 #define SPDK_CONFIG_OPENSSL_PATH 00:11:40.061 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:40.061 #define SPDK_CONFIG_PGO_DIR 00:11:40.061 #undef SPDK_CONFIG_PGO_USE 00:11:40.061 #define SPDK_CONFIG_PREFIX /usr/local 00:11:40.061 #undef SPDK_CONFIG_RAID5F 00:11:40.061 #undef SPDK_CONFIG_RBD 00:11:40.061 #define SPDK_CONFIG_RDMA 1 00:11:40.061 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:40.061 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:40.061 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:40.061 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:40.061 #define SPDK_CONFIG_SHARED 1 00:11:40.061 #undef SPDK_CONFIG_SMA 00:11:40.061 #define SPDK_CONFIG_TESTS 1 00:11:40.061 #undef SPDK_CONFIG_TSAN 
00:11:40.061 #define SPDK_CONFIG_UBLK 1 00:11:40.061 #define SPDK_CONFIG_UBSAN 1 00:11:40.061 #undef SPDK_CONFIG_UNIT_TESTS 00:11:40.061 #undef SPDK_CONFIG_URING 00:11:40.061 #define SPDK_CONFIG_URING_PATH 00:11:40.061 #undef SPDK_CONFIG_URING_ZNS 00:11:40.061 #undef SPDK_CONFIG_USDT 00:11:40.061 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:40.061 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:40.061 #define SPDK_CONFIG_VFIO_USER 1 00:11:40.061 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:40.061 #define SPDK_CONFIG_VHOST 1 00:11:40.061 #define SPDK_CONFIG_VIRTIO 1 00:11:40.061 #undef SPDK_CONFIG_VTUNE 00:11:40.061 #define SPDK_CONFIG_VTUNE_DIR 00:11:40.061 #define SPDK_CONFIG_WERROR 1 00:11:40.061 #define SPDK_CONFIG_WPDK_DIR 00:11:40.061 #undef SPDK_CONFIG_XNVME 00:11:40.061 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
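The header dump above is the generated include/spdk/config.h, and the escaped glob at the end of the test (*\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G*) is applications.sh substring-matching the whole file to detect a debug build before honoring SPDK_AUTOTEST_DEBUG_APPS. The same probe in isolation:

config=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
if [[ $(< "$config") == *'#define SPDK_CONFIG_DEBUG'* ]]; then
    echo 'debug build detected'
fi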
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:40.061 16:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:40.061 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:40.062 16:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:40.062 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
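Every ': 0' (or ': 1', ': tcp', ': e810') immediately followed by an export in this stretch of trace is one expansion of the shell default-assignment idiom: ${VAR=default} assigns only when the variable is unset, the ':' builtin discards the expanded value, and the export hands the result down to child test scripts; values already present in the environment, such as SPDK_TEST_NVMF=1 or SPDK_TEST_NVMF_TRANSPORT=tcp here, survive untouched. The idiom in isolation (variable names from the trace, defaults illustrative):

: "${SPDK_TEST_NVMF=0}"              # keep an inherited value, otherwise default to 0
export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT=tcp}"
export SPDK_TEST_NVMF_TRANSPORT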
common/autotest_common.sh@169 -- # : 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.063 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
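The sanitizer wiring traced above reduces to a handful of exports. A minimal standalone sketch of the same setup, assuming an ASan/UBSan-instrumented build; the option strings and the libfuse3 leak suppression are copied verbatim from the trace:

  # Mirror the asan_suppression_file steps from the trace above:
  # suppress the known libfuse3 leak, then configure the sanitizers.
  rm -rf /var/tmp/asan_suppression_file
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134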
00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1157695 ]] 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1157695 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
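The set_test_storage body traced below decides where the test's scratch data lives: it prefers $testdir, falls back to a mktemp-style path, and takes the first candidate whose backing filesystem can hold the requested 2 GiB (the helper then pads the request with a safety margin, visible as requested_size=2214592512 in the trace). A condensed sketch of that selection, assuming GNU df and a $testdir set by the harness; the real helper pre-parses `df -T` into arrays rather than calling df per candidate:

  # Condensed version of the storage-candidate walk in the trace below.
  requested_size=2147483648                    # 2 GiB, as passed to set_test_storage
  storage_fallback=$(mktemp -udt spdk.XXXXXX)  # same template as the trace
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mount=$(df "$target_dir" 2>/dev/null | awk '$1 !~ /Filesystem/{print $6}')
      [[ -n $mount ]] || continue
      avail=$(df --output=avail -B1 "$mount" | tail -1)
      if (( avail >= requested_size )); then
          mkdir -p "$target_dir"
          export SPDK_TEST_STORAGE=$target_dir
          break
      fi
  done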
00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.7RLdUp 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7RLdUp/tests/target /tmp/spdk.7RLdUp 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:40.064 16:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118319644672 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11036864512 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.064 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.065 16:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677687296 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=569344 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:40.065 * Looking for test storage... 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118319644672 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13251457024 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:40.065 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.328 --rc genhtml_branch_coverage=1 00:11:40.328 --rc genhtml_function_coverage=1 00:11:40.328 --rc genhtml_legend=1 00:11:40.328 --rc geninfo_all_blocks=1 00:11:40.328 --rc geninfo_unexecuted_blocks=1 00:11:40.328 00:11:40.328 ' 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.328 --rc genhtml_branch_coverage=1 00:11:40.328 --rc genhtml_function_coverage=1 00:11:40.328 --rc genhtml_legend=1 00:11:40.328 --rc geninfo_all_blocks=1 00:11:40.328 --rc geninfo_unexecuted_blocks=1 00:11:40.328 00:11:40.328 ' 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.328 --rc genhtml_branch_coverage=1 00:11:40.328 --rc genhtml_function_coverage=1 00:11:40.328 --rc genhtml_legend=1 00:11:40.328 --rc geninfo_all_blocks=1 00:11:40.328 --rc geninfo_unexecuted_blocks=1 00:11:40.328 00:11:40.328 ' 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.328 --rc genhtml_branch_coverage=1 00:11:40.328 --rc genhtml_function_coverage=1 00:11:40.328 --rc genhtml_legend=1 00:11:40.328 --rc geninfo_all_blocks=1 00:11:40.328 --rc geninfo_unexecuted_blocks=1 00:11:40.328 00:11:40.328 ' 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.328 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.329 16:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.329 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:48.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:48.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.484 16:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.484 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:48.485 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:48.485 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:48.485 16:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:48.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:11:48.485 00:11:48.485 --- 10.0.0.2 ping statistics --- 00:11:48.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.485 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:48.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:11:48.485 00:11:48.485 --- 10.0.0.1 ping statistics --- 00:11:48.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.485 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 ************************************ 00:11:48.485 START TEST nvmf_filesystem_no_in_capsule 00:11:48.485 ************************************ 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1161570 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1161570 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1161570 ']' 00:11:48.485 
16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.485 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.485 [2024-11-20 16:06:23.774521] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:11:48.485 [2024-11-20 16:06:23.774581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.485 [2024-11-20 16:06:23.876454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.485 [2024-11-20 16:06:23.929236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.485 [2024-11-20 16:06:23.929288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.485 [2024-11-20 16:06:23.929297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.485 [2024-11-20 16:06:23.929304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.485 [2024-11-20 16:06:23.929311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
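Everything from here on talks to a target process running inside the cvl_0_0_ns_spdk namespace created earlier. A simplified sketch of that launch-and-wait pattern, assuming SPDK's rpc.py is on hand; the flags match the nvmf_tgt invocation in the trace, while the polling loop is a stand-in for what waitforlisten does:

  # Start nvmf_tgt in the target namespace, then poll its RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  for ((i = 0; i < 100; i++)); do
      # rpc.py fails until the app listens on /var/tmp/spdk.sock
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done

The subsystem wiring that follows in the trace (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) is issued over this same socket.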
00:11:48.485 [2024-11-20 16:06:23.931721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.485 [2024-11-20 16:06:23.931884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.485 [2024-11-20 16:06:23.932053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.485 [2024-11-20 16:06:23.932052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.747 [2024-11-20 16:06:24.659740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.747 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.009 Malloc1 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.009 16:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.009 [2024-11-20 16:06:24.816425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.009 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.009 { 00:11:49.009 "name": "Malloc1", 00:11:49.009 "aliases": [ 00:11:49.010 "15557f9f-0ad6-4368-8bdd-4d2dba22e6ca" 00:11:49.010 ], 00:11:49.010 "product_name": "Malloc disk", 00:11:49.010 "block_size": 512, 00:11:49.010 "num_blocks": 1048576, 00:11:49.010 "uuid": "15557f9f-0ad6-4368-8bdd-4d2dba22e6ca", 00:11:49.010 "assigned_rate_limits": { 00:11:49.010 "rw_ios_per_sec": 0, 00:11:49.010 "rw_mbytes_per_sec": 0, 00:11:49.010 "r_mbytes_per_sec": 0, 00:11:49.010 "w_mbytes_per_sec": 0 00:11:49.010 }, 00:11:49.010 "claimed": true, 00:11:49.010 "claim_type": "exclusive_write", 00:11:49.010 "zoned": false, 00:11:49.010 "supported_io_types": { 00:11:49.010 "read": 
true, 00:11:49.010 "write": true, 00:11:49.010 "unmap": true, 00:11:49.010 "flush": true, 00:11:49.010 "reset": true, 00:11:49.010 "nvme_admin": false, 00:11:49.010 "nvme_io": false, 00:11:49.010 "nvme_io_md": false, 00:11:49.010 "write_zeroes": true, 00:11:49.010 "zcopy": true, 00:11:49.010 "get_zone_info": false, 00:11:49.010 "zone_management": false, 00:11:49.010 "zone_append": false, 00:11:49.010 "compare": false, 00:11:49.010 "compare_and_write": false, 00:11:49.010 "abort": true, 00:11:49.010 "seek_hole": false, 00:11:49.010 "seek_data": false, 00:11:49.010 "copy": true, 00:11:49.010 "nvme_iov_md": false 00:11:49.010 }, 00:11:49.010 "memory_domains": [ 00:11:49.010 { 00:11:49.010 "dma_device_id": "system", 00:11:49.010 "dma_device_type": 1 00:11:49.010 }, 00:11:49.010 { 00:11:49.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.010 "dma_device_type": 2 00:11:49.010 } 00:11:49.010 ], 00:11:49.010 "driver_specific": {} 00:11:49.010 } 00:11:49.010 ]' 00:11:49.010 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.010 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.010 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.010 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.010 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.010 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.271 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.271 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.660 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.660 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.660 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.660 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.660 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:53.207 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.208 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.468 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.853 ************************************ 00:11:54.853 START TEST filesystem_ext4 00:11:54.853 ************************************ 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
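The setup just completed sizes the malloc bdev through SPDK's JSON-RPC and then checks that the NVMe/TCP namespace the initiator sees is the same 512 MiB. A condensed sketch of that check, under the names from this run (rpc_cmd is the SPDK test wrapper around scripts/rpc.py, nvme0n1 is the device the trace resolved from the SPDKISFASTANDAWESOME serial, and the sysfs arithmetic mirrors setup/common.sh's sec_size_to_bytes):

    # Ask the target for the bdev's geometry and compute its size in bytes.
    bdev_info=$(rpc_cmd bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576 in this run
    malloc_size=$((bs * nb))                       # 536870912 bytes = 512 MiB
    # /sys/block/<dev>/size counts 512-byte sectors on the initiator side.
    nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))
    (( nvme_size == malloc_size ))                 # both test passes insist on a match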
00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:54.853 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:54.853 mke2fs 1.47.0 (5-Feb-2023) 00:11:54.853 Discarding device blocks: 0/522240 done 00:11:54.853 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:54.853 Filesystem UUID: 47e8e313-e5d6-41a3-8c84-f58ae63c4651 00:11:54.853 Superblock backups stored on blocks: 00:11:54.853 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:54.853 00:11:54.853 Allocating group tables: 0/64 done 00:11:54.853 Writing inode tables: 0/64 done 00:11:54.853 Creating journal (8192 blocks): done 00:11:57.179 Writing superblocks and filesystem accounting information: 0/64 done 00:11:57.179 00:11:57.179 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:57.179 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.760 
16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1161570 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.760 00:12:03.760 real 0m8.073s 00:12:03.760 user 0m0.020s 00:12:03.760 sys 0m0.094s 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.760 ************************************ 00:12:03.760 END TEST filesystem_ext4 00:12:03.760 ************************************ 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.760 ************************************ 00:12:03.760 START TEST filesystem_btrfs 00:12:03.760 ************************************ 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:03.760 16:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.760 btrfs-progs v6.8.1 00:12:03.760 See https://btrfs.readthedocs.io for more information. 00:12:03.760 00:12:03.760 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:03.760 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.760 this does not affect your deployments: 00:12:03.760 - DUP for metadata (-m dup) 00:12:03.760 - enabled no-holes (-O no-holes) 00:12:03.760 - enabled free-space-tree (-R free-space-tree) 00:12:03.760 00:12:03.760 Label: (null) 00:12:03.760 UUID: 82d06f78-9ca3-4816-8432-ee1e8fcf4a3c 00:12:03.760 Node size: 16384 00:12:03.760 Sector size: 4096 (CPU page size: 4096) 00:12:03.760 Filesystem size: 510.00MiB 00:12:03.760 Block group profiles: 00:12:03.760 Data: single 8.00MiB 00:12:03.760 Metadata: DUP 32.00MiB 00:12:03.760 System: DUP 8.00MiB 00:12:03.760 SSD detected: yes 00:12:03.760 Zoned device: no 00:12:03.760 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.760 Checksum: crc32c 00:12:03.760 Number of devices: 1 00:12:03.760 Devices: 00:12:03.760 ID SIZE PATH 00:12:03.760 1 510.00MiB /dev/nvme0n1p1 00:12:03.760 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.760 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.760 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.760 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.760 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.760 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.760 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.760 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.760 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1161570 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.761 
16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.761 00:12:03.761 real 0m0.858s 00:12:03.761 user 0m0.019s 00:12:03.761 sys 0m0.127s 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:03.761 ************************************ 00:12:03.761 END TEST filesystem_btrfs 00:12:03.761 ************************************ 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.761 ************************************ 00:12:03.761 START TEST filesystem_xfs 00:12:03.761 ************************************ 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.761 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.761 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.761 = sectsz=512 attr=2, projid32bit=1 00:12:03.761 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.761 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.761 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:03.761 = sunit=0 swidth=0 blks 00:12:03.761 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.761 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.761 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.761 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:04.703 Discarding blocks...Done. 00:12:04.703 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:04.703 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1161570 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.617 00:12:06.617 real 0m2.689s 00:12:06.617 user 0m0.031s 00:12:06.617 sys 0m0.076s 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.617 ************************************ 00:12:06.617 END TEST filesystem_xfs 00:12:06.617 ************************************ 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.617 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.879 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.879 16:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.879 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.879 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.879 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1161570 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1161570 ']' 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1161570 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1161570 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1161570' 00:12:07.141 killing process with pid 1161570 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1161570 00:12:07.141 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1161570 00:12:07.402 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.402 00:12:07.402 real 0m19.393s 00:12:07.402 user 1m16.601s 00:12:07.402 sys 0m1.478s 00:12:07.402 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.402 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.402 ************************************ 00:12:07.402 END TEST nvmf_filesystem_no_in_capsule 00:12:07.402 ************************************ 00:12:07.402 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:07.402 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.402 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.402 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.402 ************************************ 00:12:07.402 START TEST nvmf_filesystem_in_capsule 00:12:07.402 ************************************ 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1165498 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1165498 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1165498 ']' 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
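The pass starting here reruns the whole filesystem matrix with in_capsule=4096. The only functional difference between the two passes is the transport's in-capsule data size, set by the nvmf_create_transport call visible a few lines below; the -c 0 form for the first pass is inferred from its earlier '[' 0 -eq 0 ']' branch rather than shown in this excerpt:

    # Pass 1 (no_in_capsule): write payloads always move in a separate
    # host-to-controller data transfer after the command capsule.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # Pass 2 (in_capsule): up to 4096 bytes may ride inside the command
    # capsule itself, exercising the target's in-capsule receive path.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096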
00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.403 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.403 [2024-11-20 16:06:43.244334] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:12:07.403 [2024-11-20 16:06:43.244384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.403 [2024-11-20 16:06:43.335802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.663 [2024-11-20 16:06:43.366747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.663 [2024-11-20 16:06:43.366773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.663 [2024-11-20 16:06:43.366782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.663 [2024-11-20 16:06:43.366787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.663 [2024-11-20 16:06:43.366791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.663 [2024-11-20 16:06:43.367985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.663 [2024-11-20 16:06:43.368135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.663 [2024-11-20 16:06:43.368291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.663 [2024-11-20 16:06:43.368431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.236 [2024-11-20 16:06:44.093036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.236 16:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.236 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.497 Malloc1 00:12:08.497 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.497 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.497 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.497 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.497 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.497 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.497 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.498 [2024-11-20 16:06:44.218756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:08.498 16:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:08.498 { 00:12:08.498 "name": "Malloc1", 00:12:08.498 "aliases": [ 00:12:08.498 "7011928e-0d8b-4394-80d4-75e0b32e59d7" 00:12:08.498 ], 00:12:08.498 "product_name": "Malloc disk", 00:12:08.498 "block_size": 512, 00:12:08.498 "num_blocks": 1048576, 00:12:08.498 "uuid": "7011928e-0d8b-4394-80d4-75e0b32e59d7", 00:12:08.498 "assigned_rate_limits": { 00:12:08.498 "rw_ios_per_sec": 0, 00:12:08.498 "rw_mbytes_per_sec": 0, 00:12:08.498 "r_mbytes_per_sec": 0, 00:12:08.498 "w_mbytes_per_sec": 0 00:12:08.498 }, 00:12:08.498 "claimed": true, 00:12:08.498 "claim_type": "exclusive_write", 00:12:08.498 "zoned": false, 00:12:08.498 "supported_io_types": { 00:12:08.498 "read": true, 00:12:08.498 "write": true, 00:12:08.498 "unmap": true, 00:12:08.498 "flush": true, 00:12:08.498 "reset": true, 00:12:08.498 "nvme_admin": false, 00:12:08.498 "nvme_io": false, 00:12:08.498 "nvme_io_md": false, 00:12:08.498 "write_zeroes": true, 00:12:08.498 "zcopy": true, 00:12:08.498 "get_zone_info": false, 00:12:08.498 "zone_management": false, 00:12:08.498 "zone_append": false, 00:12:08.498 "compare": false, 00:12:08.498 "compare_and_write": false, 00:12:08.498 "abort": true, 00:12:08.498 "seek_hole": false, 00:12:08.498 "seek_data": false, 00:12:08.498 "copy": true, 00:12:08.498 "nvme_iov_md": false 00:12:08.498 }, 00:12:08.498 "memory_domains": [ 00:12:08.498 { 00:12:08.498 "dma_device_id": "system", 00:12:08.498 "dma_device_type": 1 00:12:08.498 }, 00:12:08.498 { 00:12:08.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.498 "dma_device_type": 2 00:12:08.498 } 00:12:08.498 ], 00:12:08.498 "driver_specific": {} 00:12:08.498 } 00:12:08.498 ]' 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:08.498 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.411 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.411 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.411 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.411 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.411 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:12.327 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:12.327 16:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:12.587 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.528 ************************************ 00:12:13.528 START TEST filesystem_in_capsule_ext4 00:12:13.528 ************************************ 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:13.528 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:13.528 mke2fs 1.47.0 (5-Feb-2023) 00:12:13.528 Discarding device blocks: 0/522240 done 00:12:13.528 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:13.528 Filesystem UUID: cdd4910f-6592-4d1d-bee3-bda2d5b8c559 00:12:13.528 Superblock backups stored on blocks: 00:12:13.528 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:13.528 00:12:13.528 Allocating group tables: 0/64 done 00:12:13.528 Writing inode tables: 
0/64 done 00:12:13.790 Creating journal (8192 blocks): done 00:12:16.248 Writing superblocks and filesystem accounting information: 0/64 done 00:12:16.248 00:12:16.248 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:16.248 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1165498 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.880 00:12:22.880 real 0m8.350s 00:12:22.880 user 0m0.044s 00:12:22.880 sys 0m0.067s 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:22.880 ************************************ 00:12:22.880 END TEST filesystem_in_capsule_ext4 00:12:22.880 ************************************ 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.880 
************************************ 00:12:22.880 START TEST filesystem_in_capsule_btrfs 00:12:22.880 ************************************ 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:22.880 btrfs-progs v6.8.1 00:12:22.880 See https://btrfs.readthedocs.io for more information. 00:12:22.880 00:12:22.880 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
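The '[' btrfs = ext4 ']' / force=-f exchange in the trace is make_filesystem picking the right force flag before invoking mkfs. Condensed to that decision, the helper from common/autotest_common.sh amounts to the sketch below (the real function's retry bookkeeping around i and its return paths are omitted):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        # mkfs.ext4 spells its force flag -F; mkfs.btrfs and mkfs.xfs use -f.
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs."$fstype" $force "$dev_name"
    }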
00:12:22.880 NOTE: several default settings have changed in version 5.15, please make sure 00:12:22.880 this does not affect your deployments: 00:12:22.880 - DUP for metadata (-m dup) 00:12:22.880 - enabled no-holes (-O no-holes) 00:12:22.880 - enabled free-space-tree (-R free-space-tree) 00:12:22.880 00:12:22.880 Label: (null) 00:12:22.880 UUID: 7f7cdb50-ee5c-48e1-b291-5e3dccbcb56d 00:12:22.880 Node size: 16384 00:12:22.880 Sector size: 4096 (CPU page size: 4096) 00:12:22.880 Filesystem size: 510.00MiB 00:12:22.880 Block group profiles: 00:12:22.880 Data: single 8.00MiB 00:12:22.880 Metadata: DUP 32.00MiB 00:12:22.880 System: DUP 8.00MiB 00:12:22.880 SSD detected: yes 00:12:22.880 Zoned device: no 00:12:22.880 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:22.880 Checksum: crc32c 00:12:22.880 Number of devices: 1 00:12:22.880 Devices: 00:12:22.880 ID SIZE PATH 00:12:22.880 1 510.00MiB /dev/nvme0n1p1 00:12:22.880 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:22.880 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:23.141 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:23.141 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:23.141 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:23.141 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:23.141 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:23.141 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:23.401 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1165498 00:12:23.401 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:23.401 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:23.401 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:23.401 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:23.401 00:12:23.401 real 0m1.279s 00:12:23.401 user 0m0.020s 00:12:23.401 sys 0m0.125s 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:23.402 ************************************ 00:12:23.402 END TEST filesystem_in_capsule_btrfs 00:12:23.402 ************************************ 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.402 ************************************ 00:12:23.402 START TEST filesystem_in_capsule_xfs 00:12:23.402 ************************************ 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:23.402 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:23.402 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:23.402 = sectsz=512 attr=2, projid32bit=1 00:12:23.402 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:23.402 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:23.402 data = bsize=4096 blocks=130560, imaxpct=25 00:12:23.402 = sunit=0 swidth=0 blks 00:12:23.402 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:23.402 log =internal log bsize=4096 blocks=16384, version=2 00:12:23.402 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:23.402 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:24.343 Discarding blocks...Done. 
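[Editor's note] The btrfs pass above and the xfs pass that follows both run the same smoke test from target/filesystem.sh against the partition exported over NVMe/TCP. A minimal sketch of that sequence, reconstructed from the traced commands (the device, mount point, and nvmf_tgt PID 1165498 are the values from this run; treat them as placeholders):

    #!/usr/bin/env bash
    # Sketch of the per-filesystem check traced at filesystem.sh lines 23-43 above.
    # Assumes the NVMe/TCP namespace is already connected and partitioned.
    set -e
    nvmfpid=1165498                            # target PID recorded by the harness
    mount /dev/nvme0n1p1 /mnt/device           # mount the freshly created fs
    touch /mnt/device/aaa                      # write a file over the fabric
    sync
    rm /mnt/device/aaa                         # delete it again
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # controller still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible

The point of the check is less the filesystem itself than that a full mount/write/flush cycle completes without the target process dying or the block device disappearing.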
00:12:24.343 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:24.343 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1165498 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.887 00:12:26.887 real 0m3.453s 00:12:26.887 user 0m0.032s 00:12:26.887 sys 0m0.075s 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:26.887 ************************************ 00:12:26.887 END TEST filesystem_in_capsule_xfs 00:12:26.887 ************************************ 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.887 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1165498 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1165498 ']' 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1165498 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1165498 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1165498' 00:12:27.148 killing process with pid 1165498 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1165498 00:12:27.148 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1165498 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:27.410 00:12:27.410 real 0m19.937s 00:12:27.410 user 1m18.904s 00:12:27.410 sys 0m1.410s 00:12:27.410 16:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.410 ************************************ 00:12:27.410 END TEST nvmf_filesystem_in_capsule 00:12:27.410 ************************************ 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.410 rmmod nvme_tcp 00:12:27.410 rmmod nvme_fabrics 00:12:27.410 rmmod nvme_keyring 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.410 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.958 00:12:29.958 real 0m49.742s 00:12:29.958 user 2m37.948s 00:12:29.958 sys 0m8.826s 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.958 
************************************ 00:12:29.958 END TEST nvmf_filesystem 00:12:29.958 ************************************ 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.958 ************************************ 00:12:29.958 START TEST nvmf_target_discovery 00:12:29.958 ************************************ 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.958 * Looking for test storage... 00:12:29.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:29.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.958 --rc genhtml_branch_coverage=1 00:12:29.958 --rc genhtml_function_coverage=1 00:12:29.958 --rc genhtml_legend=1 00:12:29.958 --rc geninfo_all_blocks=1 00:12:29.958 --rc geninfo_unexecuted_blocks=1 00:12:29.958 00:12:29.958 ' 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:29.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.958 --rc genhtml_branch_coverage=1 00:12:29.958 --rc genhtml_function_coverage=1 00:12:29.958 --rc genhtml_legend=1 00:12:29.958 --rc geninfo_all_blocks=1 00:12:29.958 --rc geninfo_unexecuted_blocks=1 00:12:29.958 00:12:29.958 ' 00:12:29.958 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:29.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.958 --rc genhtml_branch_coverage=1 00:12:29.958 --rc genhtml_function_coverage=1 00:12:29.959 --rc genhtml_legend=1 00:12:29.959 --rc geninfo_all_blocks=1 00:12:29.959 --rc geninfo_unexecuted_blocks=1 00:12:29.959 00:12:29.959 ' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.959 --rc genhtml_branch_coverage=1 00:12:29.959 --rc genhtml_function_coverage=1 00:12:29.959 --rc genhtml_legend=1 00:12:29.959 --rc geninfo_all_blocks=1 00:12:29.959 --rc geninfo_unexecuted_blocks=1 00:12:29.959 00:12:29.959 ' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.959 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.105 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.105 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.105 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.105 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.105 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.105 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.106 16:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:38.106 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:38.106 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:38.106 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
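[Editor's note] The device walk above comes from gather_supported_nvmf_pci_devs: for every supported PCI address it lists the kernel network interfaces published under sysfs and keeps the ones that are up. A stripped-down sketch of that lookup, using the two E810 BDFs found in this run as example inputs:

    # For each NIC, its net devices live under /sys/bus/pci/devices/<bdf>/net/.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] || continue
            echo "Found net devices under $pci: ${dev##*/}"   # e.g. cvl_0_0
        done
    done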
00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:38.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.106 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.107 16:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.107 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:12:38.107 00:12:38.107 --- 10.0.0.2 ping statistics --- 00:12:38.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.107 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:38.107 00:12:38.107 --- 10.0.0.1 ping statistics --- 00:12:38.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.107 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1173760 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1173760 00:12:38.107 16:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1173760 ']' 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.107 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.107 [2024-11-20 16:07:13.177078] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:12:38.107 [2024-11-20 16:07:13.177148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.107 [2024-11-20 16:07:13.279627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.107 [2024-11-20 16:07:13.333318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.107 [2024-11-20 16:07:13.333373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.107 [2024-11-20 16:07:13.333383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.107 [2024-11-20 16:07:13.333390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.107 [2024-11-20 16:07:13.333397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
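[Editor's note] nvmfappstart, traced above, boots nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and blocks in waitforlisten until the app's RPC socket answers before any rpc_cmd calls are issued. A condensed sketch of that startup; the polling loop is an approximation of waitforlisten (rpc_get_methods is just a cheap RPC to probe with), and the paths match this workspace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Launch the target inside the netns so it binds the namespaced interface.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is up (waitforlisten analogue).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # First RPC of this test, as traced below: create the TCP transport.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192

The RPC socket is a UNIX-domain socket on the shared filesystem, so rpc.py itself does not need to run inside the namespace.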
00:12:38.107 [2024-11-20 16:07:13.335458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.107 [2024-11-20 16:07:13.335616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.107 [2024-11-20 16:07:13.335781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.107 [2024-11-20 16:07:13.335782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.107 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.107 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:38.107 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.107 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.107 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 [2024-11-20 16:07:14.055985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 Null1 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 [2024-11-20 16:07:14.116463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 Null2 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:38.370 Null3 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 Null4 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.370 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:38.632 00:12:38.632 Discovery Log Number of Records 6, Generation counter 6 00:12:38.632 =====Discovery Log Entry 0====== 00:12:38.632 trtype: tcp 00:12:38.632 adrfam: ipv4 00:12:38.632 subtype: current discovery subsystem 00:12:38.632 treq: not required 00:12:38.632 portid: 0 00:12:38.632 trsvcid: 4420 00:12:38.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:38.632 traddr: 10.0.0.2 00:12:38.632 eflags: explicit discovery connections, duplicate discovery information 00:12:38.632 sectype: none 00:12:38.632 =====Discovery Log Entry 1====== 00:12:38.632 trtype: tcp 00:12:38.632 adrfam: ipv4 00:12:38.632 subtype: nvme subsystem 00:12:38.632 treq: not required 00:12:38.632 portid: 0 00:12:38.632 trsvcid: 4420 00:12:38.632 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:38.632 traddr: 10.0.0.2 00:12:38.632 eflags: none 00:12:38.632 sectype: none 00:12:38.632 =====Discovery Log Entry 2====== 00:12:38.632 trtype: tcp 00:12:38.632 adrfam: ipv4 00:12:38.632 subtype: nvme subsystem 00:12:38.632 treq: not required 00:12:38.632 portid: 0 00:12:38.632 trsvcid: 4420 00:12:38.632 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:38.632 traddr: 10.0.0.2 00:12:38.632 eflags: none 00:12:38.632 sectype: none 00:12:38.632 =====Discovery Log Entry 3====== 00:12:38.632 trtype: tcp 00:12:38.632 adrfam: ipv4 00:12:38.632 subtype: nvme subsystem 00:12:38.632 treq: not required 00:12:38.632 portid: 0 00:12:38.632 trsvcid: 4420 00:12:38.632 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:38.632 traddr: 10.0.0.2 00:12:38.632 eflags: none 00:12:38.632 sectype: none 00:12:38.632 =====Discovery Log Entry 4====== 00:12:38.632 trtype: tcp 00:12:38.632 adrfam: ipv4 00:12:38.632 subtype: nvme subsystem 
00:12:38.632 treq: not required 00:12:38.632 portid: 0 00:12:38.632 trsvcid: 4420 00:12:38.632 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:38.632 traddr: 10.0.0.2 00:12:38.632 eflags: none 00:12:38.632 sectype: none 00:12:38.632 =====Discovery Log Entry 5====== 00:12:38.632 trtype: tcp 00:12:38.632 adrfam: ipv4 00:12:38.632 subtype: discovery subsystem referral 00:12:38.632 treq: not required 00:12:38.632 portid: 0 00:12:38.632 trsvcid: 4430 00:12:38.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:38.632 traddr: 10.0.0.2 00:12:38.632 eflags: none 00:12:38.632 sectype: none 00:12:38.632 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:38.632 Perform nvmf subsystem discovery via RPC 00:12:38.632 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:38.632 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.632 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.632 [ 00:12:38.632 { 00:12:38.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:38.632 "subtype": "Discovery", 00:12:38.632 "listen_addresses": [ 00:12:38.632 { 00:12:38.632 "trtype": "TCP", 00:12:38.632 "adrfam": "IPv4", 00:12:38.632 "traddr": "10.0.0.2", 00:12:38.632 "trsvcid": "4420" 00:12:38.632 } 00:12:38.632 ], 00:12:38.632 "allow_any_host": true, 00:12:38.632 "hosts": [] 00:12:38.632 }, 00:12:38.632 { 00:12:38.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.632 "subtype": "NVMe", 00:12:38.632 "listen_addresses": [ 00:12:38.632 { 00:12:38.632 "trtype": "TCP", 00:12:38.632 "adrfam": "IPv4", 00:12:38.632 "traddr": "10.0.0.2", 00:12:38.632 "trsvcid": "4420" 00:12:38.632 } 00:12:38.632 ], 00:12:38.632 "allow_any_host": true, 00:12:38.632 "hosts": [], 00:12:38.632 "serial_number": "SPDK00000000000001", 00:12:38.632 "model_number": "SPDK bdev Controller", 00:12:38.632 "max_namespaces": 32, 00:12:38.632 "min_cntlid": 1, 00:12:38.632 "max_cntlid": 65519, 00:12:38.632 "namespaces": [ 00:12:38.632 { 00:12:38.632 "nsid": 1, 00:12:38.632 "bdev_name": "Null1", 00:12:38.632 "name": "Null1", 00:12:38.632 "nguid": "8BF548316A4F447B9379A355E0D85603", 00:12:38.632 "uuid": "8bf54831-6a4f-447b-9379-a355e0d85603" 00:12:38.632 } 00:12:38.632 ] 00:12:38.632 }, 00:12:38.632 { 00:12:38.632 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:38.632 "subtype": "NVMe", 00:12:38.632 "listen_addresses": [ 00:12:38.632 { 00:12:38.632 "trtype": "TCP", 00:12:38.632 "adrfam": "IPv4", 00:12:38.632 "traddr": "10.0.0.2", 00:12:38.632 "trsvcid": "4420" 00:12:38.632 } 00:12:38.632 ], 00:12:38.632 "allow_any_host": true, 00:12:38.632 "hosts": [], 00:12:38.632 "serial_number": "SPDK00000000000002", 00:12:38.632 "model_number": "SPDK bdev Controller", 00:12:38.632 "max_namespaces": 32, 00:12:38.632 "min_cntlid": 1, 00:12:38.632 "max_cntlid": 65519, 00:12:38.632 "namespaces": [ 00:12:38.632 { 00:12:38.632 "nsid": 1, 00:12:38.632 "bdev_name": "Null2", 00:12:38.632 "name": "Null2", 00:12:38.632 "nguid": "CD98DF1DE09249F29BF262A29F212B66", 00:12:38.632 "uuid": "cd98df1d-e092-49f2-9bf2-62a29f212b66" 00:12:38.632 } 00:12:38.632 ] 00:12:38.632 }, 00:12:38.632 { 00:12:38.632 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:38.632 "subtype": "NVMe", 00:12:38.632 "listen_addresses": [ 00:12:38.632 { 00:12:38.632 "trtype": "TCP", 00:12:38.632 "adrfam": "IPv4", 00:12:38.632 "traddr": "10.0.0.2", 
00:12:38.632 "trsvcid": "4420" 00:12:38.632 } 00:12:38.632 ], 00:12:38.632 "allow_any_host": true, 00:12:38.632 "hosts": [], 00:12:38.632 "serial_number": "SPDK00000000000003", 00:12:38.633 "model_number": "SPDK bdev Controller", 00:12:38.633 "max_namespaces": 32, 00:12:38.633 "min_cntlid": 1, 00:12:38.633 "max_cntlid": 65519, 00:12:38.633 "namespaces": [ 00:12:38.633 { 00:12:38.633 "nsid": 1, 00:12:38.633 "bdev_name": "Null3", 00:12:38.633 "name": "Null3", 00:12:38.633 "nguid": "E773DDDD11F9426B8F6A70C0383389D0", 00:12:38.633 "uuid": "e773dddd-11f9-426b-8f6a-70c0383389d0" 00:12:38.633 } 00:12:38.633 ] 00:12:38.633 }, 00:12:38.633 { 00:12:38.633 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:38.633 "subtype": "NVMe", 00:12:38.633 "listen_addresses": [ 00:12:38.633 { 00:12:38.633 "trtype": "TCP", 00:12:38.633 "adrfam": "IPv4", 00:12:38.633 "traddr": "10.0.0.2", 00:12:38.633 "trsvcid": "4420" 00:12:38.633 } 00:12:38.633 ], 00:12:38.633 "allow_any_host": true, 00:12:38.633 "hosts": [], 00:12:38.633 "serial_number": "SPDK00000000000004", 00:12:38.633 "model_number": "SPDK bdev Controller", 00:12:38.633 "max_namespaces": 32, 00:12:38.633 "min_cntlid": 1, 00:12:38.633 "max_cntlid": 65519, 00:12:38.633 "namespaces": [ 00:12:38.633 { 00:12:38.633 "nsid": 1, 00:12:38.633 "bdev_name": "Null4", 00:12:38.633 "name": "Null4", 00:12:38.633 "nguid": "4E69AD3D380F447EA7EC816F2723FDC2", 00:12:38.633 "uuid": "4e69ad3d-380f-447e-a7ec-816f2723fdc2" 00:12:38.633 } 00:12:38.633 ] 00:12:38.633 } 00:12:38.633 ] 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.633 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:38.894 16:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.894 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.895 rmmod nvme_tcp 00:12:38.895 rmmod nvme_fabrics 00:12:38.895 rmmod nvme_keyring 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1173760 ']' 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1173760 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1173760 ']' 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1173760 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.895 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1173760 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1173760' 00:12:39.157 killing process with pid 1173760 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1173760 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1173760 00:12:39.157 16:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.157 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.157 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.157 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:39.157 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.157 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.157 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.705 00:12:41.705 real 0m11.683s 00:12:41.705 user 0m9.026s 00:12:41.705 sys 0m6.087s 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.705 ************************************ 00:12:41.705 END TEST nvmf_target_discovery 00:12:41.705 ************************************ 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.705 ************************************ 00:12:41.705 START TEST nvmf_referrals 00:12:41.705 ************************************ 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:41.705 * Looking for test storage... 
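The nvmf_target_discovery run that just finished is a create/verify/teardown cycle: four null bdevs are exposed through subsystems cnode1-cnode4 plus a discovery referral on port 4430, the nvme discovery log (6 records) and the nvmf_get_subsystems JSON are checked, and everything is deleted again. A minimal hand-driven sketch of the same cycle through SPDK's rpc.py, assuming an already-running nvmf_tgt; addresses, sizes, and NQNs mirror the log above (the --hostnqn/--hostid flags the test passes to nvme discover are omitted here for brevity):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py
    for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512                             # name, size, block size (as in the log)
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 # the discovery subsystem itself
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430           # shows up as discovery log entry 5
    nvme discover -t tcp -a 10.0.0.2 -s 4420                              # expect 6 discovery log records
    $rpc nvmf_get_subsystems                                              # the JSON the test inspects
    for i in 1 2 3 4; do
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $rpc bdev_null_delete Null$i
    done
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430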
00:12:41.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.705 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.706 --rc genhtml_branch_coverage=1 00:12:41.706 --rc genhtml_function_coverage=1 00:12:41.706 --rc genhtml_legend=1 00:12:41.706 --rc geninfo_all_blocks=1 00:12:41.706 --rc geninfo_unexecuted_blocks=1 00:12:41.706 00:12:41.706 ' 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.706 --rc genhtml_branch_coverage=1 00:12:41.706 --rc genhtml_function_coverage=1 00:12:41.706 --rc genhtml_legend=1 00:12:41.706 --rc geninfo_all_blocks=1 00:12:41.706 --rc geninfo_unexecuted_blocks=1 00:12:41.706 00:12:41.706 ' 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.706 --rc genhtml_branch_coverage=1 00:12:41.706 --rc genhtml_function_coverage=1 00:12:41.706 --rc genhtml_legend=1 00:12:41.706 --rc geninfo_all_blocks=1 00:12:41.706 --rc geninfo_unexecuted_blocks=1 00:12:41.706 00:12:41.706 ' 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.706 --rc genhtml_branch_coverage=1 00:12:41.706 --rc genhtml_function_coverage=1 00:12:41.706 --rc genhtml_legend=1 00:12:41.706 --rc geninfo_all_blocks=1 00:12:41.706 --rc geninfo_unexecuted_blocks=1 00:12:41.706 00:12:41.706 ' 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.706 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.707 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:49.876 16:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.876 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:49.877 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:49.877 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:49.877 
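With NET_TYPE=phy the harness drives real NICs rather than virtual interfaces: common.sh matches the PCI bus against a table of Intel (0x8086) and Mellanox (0x15b3) device IDs, here finding two E810 (0x159b) functions at 0000:4b:00.0 and 0000:4b:00.1, and then resolves each PCI function to its bound kernel netdev through sysfs. The lookup it performs amounts to the following (PCI address taken from the log above):

    # Resolve a PCI function to its kernel net device, as common.sh does
    # via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*):
    ls /sys/bus/pci/devices/0000:4b:00.0/net/    # prints the bound netdev, e.g. cvl_0_0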
16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:49.877 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:49.877 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:49.877 16:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:49.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:49.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:12:49.877 00:12:49.877 --- 10.0.0.2 ping statistics --- 00:12:49.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.877 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:12:49.877 00:12:49.877 --- 10.0.0.1 ping statistics --- 00:12:49.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.877 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.877 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1178447 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1178447 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1178447 ']' 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
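nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace created above, so 10.0.0.2:8009 is served from the namespaced port while initiator-side commands run on the host at 10.0.0.1. A sketch of the start-and-wait step, assuming the same workspace paths; the polling loop is an illustrative stand-in for common.sh's waitforlisten, not its actual implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                                  # the log shows pid 1178447
    until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                                 # poll until the RPC socket answers
    done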
00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.878 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.878 [2024-11-20 16:07:25.035488] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:12:49.878 [2024-11-20 16:07:25.035554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.878 [2024-11-20 16:07:25.136467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.878 [2024-11-20 16:07:25.189066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.878 [2024-11-20 16:07:25.189118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.878 [2024-11-20 16:07:25.189127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.878 [2024-11-20 16:07:25.189134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.878 [2024-11-20 16:07:25.189141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.878 [2024-11-20 16:07:25.191403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.878 [2024-11-20 16:07:25.191569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.878 [2024-11-20 16:07:25.191728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.878 [2024-11-20 16:07:25.191729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.140 [2024-11-20 16:07:25.911737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:50.140 [2024-11-20 16:07:25.928070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.140 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:50.140 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:50.401 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.402 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.402 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:50.663 16:07:26 
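For readers reconstructing the trace: the referral bookkeeping above reduces to three SPDK RPCs. A minimal standalone sketch, assuming a running nvmf_tgt, scripts/rpc.py from an SPDK checkout, and the default /var/tmp/spdk.sock RPC socket:

    #!/usr/bin/env bash
    RPC=scripts/rpc.py   # assumed location of the SPDK RPC client

    # Register three discovery referrals on port 4430, as the test does above.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # nvmf_discovery_get_referrals returns a JSON array; jq length counts it.
    count=$("$RPC" nvmf_discovery_get_referrals | jq length)
    [ "$count" -eq 3 ] || echo "expected 3 referrals, got $count" >&2

    # Tear the referrals back down.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
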
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.663 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:50.924 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:51.185 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:51.185 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:51.185 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:51.185 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:51.185 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.185 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.446 16:07:27 
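The same state is cross-checked from the host side by querying the discovery service directly. A sketch of the filtering traced above, with the --hostnqn/--hostid arguments elided (the test derives them from nvme gen-hostnqn):

    # Referred-to addresses from the discovery log, excluding the entry for
    # the discovery subsystem we are currently connected to.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort

    # Whole records of one subtype, as get_discovery_entries does:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq '.records[] | select(.subtype == "nvme subsystem")'
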
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:51.446 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.708 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:51.970 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
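The second half of the test exercises subsystem-scoped referrals: the -n flag attaches the referral to a named subsystem NQN (or to the discovery service) instead of a bare address. Reusing the $RPC helper from the first sketch, roughly:

    # A referral to another discovery service, and one to a specific subsystem.
    "$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    "$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # Removal names the NQN the referral resolved to; note that "discovery"
    # above expands to the well-known discovery NQN, as the trace shows.
    "$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
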
00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.231 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.231 rmmod nvme_tcp 00:12:52.232 rmmod nvme_fabrics 00:12:52.232 rmmod nvme_keyring 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1178447 ']' 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1178447 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1178447 ']' 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1178447 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.232 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1178447 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1178447' 00:12:52.494 killing process with pid 1178447 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1178447 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1178447 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.494 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.494 16:07:28 
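Teardown (nvmftestfini, traced above) unloads the host-side modules, kills the target, and scrubs the firewall rules the harness installed. The iptables pass works by filtering the saved ruleset rather than deleting rules one by one — a sketch:

    # Drop every rule the harness tagged with an SPDK_NVMF comment, then
    # restore whatever remains.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Host-side NVMe modules are removed in dependency order.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
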
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.050 00:12:55.050 real 0m13.218s 00:12:55.050 user 0m15.668s 00:12:55.050 sys 0m6.504s 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.050 ************************************ 00:12:55.050 END TEST nvmf_referrals 00:12:55.050 ************************************ 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.050 ************************************ 00:12:55.050 START TEST nvmf_connect_disconnect 00:12:55.050 ************************************ 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:55.050 * Looking for test storage... 00:12:55.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.050 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.051 16:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.051 --rc genhtml_branch_coverage=1 00:12:55.051 --rc genhtml_function_coverage=1 00:12:55.051 --rc genhtml_legend=1 00:12:55.051 --rc geninfo_all_blocks=1 00:12:55.051 --rc geninfo_unexecuted_blocks=1 00:12:55.051 00:12:55.051 ' 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.051 --rc genhtml_branch_coverage=1 00:12:55.051 --rc genhtml_function_coverage=1 00:12:55.051 --rc genhtml_legend=1 00:12:55.051 --rc geninfo_all_blocks=1 00:12:55.051 --rc geninfo_unexecuted_blocks=1 00:12:55.051 00:12:55.051 ' 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.051 --rc genhtml_branch_coverage=1 00:12:55.051 --rc genhtml_function_coverage=1 00:12:55.051 --rc genhtml_legend=1 00:12:55.051 --rc geninfo_all_blocks=1 00:12:55.051 --rc geninfo_unexecuted_blocks=1 00:12:55.051 00:12:55.051 ' 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.051 --rc genhtml_branch_coverage=1 00:12:55.051 --rc genhtml_function_coverage=1 00:12:55.051 --rc genhtml_legend=1 00:12:55.051 --rc geninfo_all_blocks=1 00:12:55.051 --rc geninfo_unexecuted_blocks=1 00:12:55.051 00:12:55.051 ' 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.051 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.052 16:07:30 
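One readability note on the trace above: paths/export.sh prepends the Go/protoc/golangci directories each time it is sourced, so the PATH string snowballs with duplicates. The duplication is harmless but noisy; a common idempotent-prepend guard looks like this (a sketch of an alternative, not what the script currently does):

    # Prepend a directory to PATH only if it is not already present.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already there; nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/go/1.21.1/bin    # second call is a no-op
    export PATH
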
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.052 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.260 
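The "integer expression expected" complaint above is test(1) being handed an empty string where a number is required: [ '' -eq 1 ] is an error, not false. A defensive pattern (variable name hypothetical, since the trace does not show which one was empty):

    # ${VAR:-0} substitutes 0 when VAR is unset or empty, so the numeric
    # comparison always sees an integer. SPDK_TEST_FOO is a placeholder name.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi
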
16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:03.260 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.260 
16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:03.260 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:03.260 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
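Device discovery above walks the PCI bus cache and matches on vendor:device IDs; both ports it reports are Intel E810 NICs (8086:159b, driver ice). The equivalent one-liner check:

    # List PCI functions with Intel vendor ID 0x8086 and device ID 0x159b.
    lspci -d 8086:159b
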
00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:03.260 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:03.260 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.261 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:13:03.261 00:13:03.261 --- 10.0.0.2 ping statistics --- 00:13:03.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.261 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:13:03.261 00:13:03.261 --- 10.0.0.1 ping statistics --- 00:13:03.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.261 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1183225 00:13:03.261 16:07:38 
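The nvmf_tcp_init sequence traced above is worth reading as one unit: it isolates one port of the NIC pair in a network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over a real link inside a single host. Condensed (run as root; interface names as discovered above):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port and confirm reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> root ns
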
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1183225 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1183225 ']' 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.261 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 [2024-11-20 16:07:38.304526] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:13:03.261 [2024-11-20 16:07:38.304592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.261 [2024-11-20 16:07:38.404927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.261 [2024-11-20 16:07:38.458436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.261 [2024-11-20 16:07:38.458487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.261 [2024-11-20 16:07:38.458496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.261 [2024-11-20 16:07:38.458503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.261 [2024-11-20 16:07:38.458515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
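nvmfappstart then launches the target inside that namespace. The invocation from the trace, trimmed to its essentials (backgrounding and pid capture are an assumption of this sketch; the harness uses waitforlisten):

    # -i 0: shared-memory instance id; -e 0xFFFF: enable all tracepoint groups;
    # -m 0xF: run four reactor cores (the four "Reactor started" lines below).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
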
00:13:03.261 [2024-11-20 16:07:38.460578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.261 [2024-11-20 16:07:38.460743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.261 [2024-11-20 16:07:38.460906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.261 [2024-11-20 16:07:38.460908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 [2024-11-20 16:07:39.185641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.587 16:07:39 
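With the app up, the test builds the target configuration over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, and a namespace, followed by the TCP listener added just below in the trace. As standalone rpc.py calls (again reusing the $RPC helper), roughly:

    "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0
    "$RPC" bdev_malloc_create 64 512                  # 64 MiB, 512 B blocks -> "Malloc0"
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
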
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.587 [2024-11-20 16:07:39.267966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:03.587 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:07.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.925 rmmod nvme_tcp 00:13:21.925 rmmod nvme_fabrics 00:13:21.925 rmmod nvme_keyring 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1183225 ']' 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1183225 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1183225 ']' 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1183225 00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
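For reference: the connect_disconnect test body above reduces to one RPC setup sequence followed by five connect/disconnect iterations. A condensed sketch, assuming the standard spdk/scripts/rpc.py client (the test itself goes through its rpc_cmd wrapper) and omitting the --hostnqn/--hostid arguments the harness adds to nvme connect:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # TCP transport init
  $rpc bdev_malloc_create 64 512                        # backing bdev "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in 1 2 3 4 5; do                                # num_iterations=5 above
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # logged as "disconnected 1 controller(s)"
  done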
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1183225
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1183225'
killing process with pid 1183225
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1183225
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1183225
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:21.925 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:24.499 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:24.499
00:13:24.499 real 0m29.398s
00:13:24.499 user 1m19.263s
00:13:24.499 sys 0m7.069s
00:13:24.499 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:24.499 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:24.499 ************************************
00:13:24.499 END TEST nvmf_connect_disconnect
00:13:24.499 ************************************
00:13:24.499 16:07:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:13:24.499 16:07:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:24.499 16:07:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:24.499 16:07:59
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.499 ************************************ 00:13:24.499 START TEST nvmf_multitarget 00:13:24.499 ************************************ 00:13:24.499 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:24.499 * Looking for test storage... 00:13:24.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.499 --rc genhtml_branch_coverage=1 00:13:24.499 --rc genhtml_function_coverage=1 00:13:24.499 --rc genhtml_legend=1 00:13:24.499 --rc geninfo_all_blocks=1 00:13:24.499 --rc geninfo_unexecuted_blocks=1 00:13:24.499 00:13:24.499 ' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.499 --rc genhtml_branch_coverage=1 00:13:24.499 --rc genhtml_function_coverage=1 00:13:24.499 --rc genhtml_legend=1 00:13:24.499 --rc geninfo_all_blocks=1 00:13:24.499 --rc geninfo_unexecuted_blocks=1 00:13:24.499 00:13:24.499 ' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.499 --rc genhtml_branch_coverage=1 00:13:24.499 --rc genhtml_function_coverage=1 00:13:24.499 --rc genhtml_legend=1 00:13:24.499 --rc geninfo_all_blocks=1 00:13:24.499 --rc geninfo_unexecuted_blocks=1 00:13:24.499 00:13:24.499 ' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.499 --rc genhtml_branch_coverage=1 00:13:24.499 --rc genhtml_function_coverage=1 00:13:24.499 --rc genhtml_legend=1 00:13:24.499 --rc geninfo_all_blocks=1 00:13:24.499 --rc geninfo_unexecuted_blocks=1 00:13:24.499 00:13:24.499 ' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.499 16:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:24.499 16:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.499 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:32.647 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.647 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.647 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:32.648 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:32.648 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:32.648 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:32.648 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:32.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:13:32.648 00:13:32.648 --- 10.0.0.2 ping statistics --- 00:13:32.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.648 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:13:32.648 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:13:32.648 00:13:32.648 --- 10.0.0.1 ping statistics --- 00:13:32.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.649 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1191354 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1191354 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1191354 ']' 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.649 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:32.649 [2024-11-20 16:08:07.739611] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
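For reference: the nvmf_tcp_init steps traced above move one e810 port into a private namespace and wire up the 10.0.0.0/24 test network before the target starts. Condensed from the commands in this log (the iptables rule is shown here without the SPDK_NVMF comment tag the ipts wrapper appends):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns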
00:13:32.649 [2024-11-20 16:08:07.739672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.649 [2024-11-20 16:08:07.841901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.649 [2024-11-20 16:08:07.895231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.649 [2024-11-20 16:08:07.895285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.649 [2024-11-20 16:08:07.895294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.649 [2024-11-20 16:08:07.895301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.649 [2024-11-20 16:08:07.895307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.649 [2024-11-20 16:08:07.897664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.649 [2024-11-20 16:08:07.897822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.649 [2024-11-20 16:08:07.897987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.649 [2024-11-20 16:08:07.897987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.649 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.649 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:32.649 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.649 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:32.649 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:32.911 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.911 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:32.911 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:32.911 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:32.911 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:32.911 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:32.911 "nvmf_tgt_1" 00:13:33.172 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:33.172 "nvmf_tgt_2" 00:13:33.172 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
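For reference: the create/check/delete cycle that starts above continues below. Condensed, the whole multitarget flow driven through test/nvmf/target/multitarget_rpc.py looks like the following sketch; the expected jq counts mirror the '[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']' assertions traced in this log:

  mt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $mt nvmf_get_targets | jq length          # expect 1: only the default target
  $mt nvmf_create_target -n nvmf_tgt_1 -s 32
  $mt nvmf_create_target -n nvmf_tgt_2 -s 32
  $mt nvmf_get_targets | jq length          # expect 3 after both creates
  $mt nvmf_delete_target -n nvmf_tgt_1
  $mt nvmf_delete_target -n nvmf_tgt_2
  $mt nvmf_get_targets | jq length          # expect 1 again after cleanup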
00:13:33.172 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:33.172 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:33.172 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:33.433 true 00:13:33.433 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:33.433 true 00:13:33.433 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:33.433 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.693 rmmod nvme_tcp 00:13:33.693 rmmod nvme_fabrics 00:13:33.693 rmmod nvme_keyring 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1191354 ']' 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1191354 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1191354 ']' 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1191354 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1191354 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.693 16:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1191354'
killing process with pid 1191354
00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1191354
00:13:33.693 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1191354
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:33.955 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:35.872 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:36.133
00:13:36.133 real 0m11.855s
00:13:36.133 user 0m10.223s
00:13:36.133 sys 0m6.254s
00:13:36.133 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:36.133 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:13:36.133 ************************************
00:13:36.133 END TEST nvmf_multitarget
00:13:36.133 ************************************
00:13:36.133 16:08:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:13:36.133 16:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:36.133 16:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:36.133 ************************************
00:13:36.133 START TEST nvmf_rpc
00:13:36.133 ************************************
00:13:36.133 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:13:36.133 * Looking for test storage...
00:13:36.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.133 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:36.133 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:36.133 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:36.396 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.397 --rc genhtml_branch_coverage=1 00:13:36.397 --rc genhtml_function_coverage=1 00:13:36.397 --rc genhtml_legend=1 00:13:36.397 --rc geninfo_all_blocks=1 00:13:36.397 --rc geninfo_unexecuted_blocks=1 00:13:36.397 00:13:36.397 ' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.397 --rc genhtml_branch_coverage=1 00:13:36.397 --rc genhtml_function_coverage=1 00:13:36.397 --rc genhtml_legend=1 00:13:36.397 --rc geninfo_all_blocks=1 00:13:36.397 --rc geninfo_unexecuted_blocks=1 00:13:36.397 00:13:36.397 ' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.397 --rc genhtml_branch_coverage=1 00:13:36.397 --rc genhtml_function_coverage=1 00:13:36.397 --rc genhtml_legend=1 00:13:36.397 --rc geninfo_all_blocks=1 00:13:36.397 --rc geninfo_unexecuted_blocks=1 00:13:36.397 00:13:36.397 ' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.397 --rc genhtml_branch_coverage=1 00:13:36.397 --rc genhtml_function_coverage=1 00:13:36.397 --rc genhtml_legend=1 00:13:36.397 --rc geninfo_all_blocks=1 00:13:36.397 --rc geninfo_unexecuted_blocks=1 00:13:36.397 00:13:36.397 ' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
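For reference: the lt/cmp_versions xtrace above is scripts/common.sh deciding which lcov options the installed lcov supports. An illustrative re-creation of the comparison (not the actual helper): split both version strings on '.', '-' and ':' and compare numerically field by field, so lt 1.15 2 succeeds on the first field:

  lt() {
      local -a v1 v2; local i
      IFS=.-: read -ra v1 <<< "$1"     # same IFS split the trace shows
      IFS=.-: read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                         # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"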
00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.397 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.398 16:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.398 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:44.542 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:44.542 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:44.542 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:44.542 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.542 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:44.543 16:08:19 
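gather_supported_nvmf_pci_devs resolves each matching PCI function to its kernel netdev through sysfs, which is why the two E810 ports (0x8086:0x159b, driver ice) surface here as cvl_0_0 and cvl_0_1. The lookup reduces to the glob seen in the trace; a sketch (the PCI address is from this run, the names are specific to this host):

    # map a PCI function to its net interface(s) via sysfs
    pci=0000:4b:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "${path##*/}"    # prints cvl_0_0 on this machine
    done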
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:44.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:13:44.543 00:13:44.543 --- 10.0.0.2 ping statistics --- 00:13:44.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.543 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:13:44.543 00:13:44.543 --- 10.0.0.1 ping statistics --- 00:13:44.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.543 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1195980 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1195980 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1195980 ']' 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.543 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.543 [2024-11-20 16:08:19.800562] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
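The nvmf_tcp_init phase above builds the test topology from the two physical ports: the target port is moved into a private network namespace while the initiator port stays in the root namespace, so a single host exercises a real TCP path. Condensed from the trace (interface names and addresses as on this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # verify both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown below), so its listeners bind on the target-side address 10.0.0.2.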
00:13:44.543 [2024-11-20 16:08:19.800634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.543 [2024-11-20 16:08:19.902265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.543 [2024-11-20 16:08:19.955056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.543 [2024-11-20 16:08:19.955132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.543 [2024-11-20 16:08:19.955141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.543 [2024-11-20 16:08:19.955148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.543 [2024-11-20 16:08:19.955154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.543 [2024-11-20 16:08:19.957210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.543 [2024-11-20 16:08:19.957334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.543 [2024-11-20 16:08:19.957495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.543 [2024-11-20 16:08:19.957496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.804 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:44.805 "tick_rate": 2400000000, 00:13:44.805 "poll_groups": [ 00:13:44.805 { 00:13:44.805 "name": "nvmf_tgt_poll_group_000", 00:13:44.805 "admin_qpairs": 0, 00:13:44.805 "io_qpairs": 0, 00:13:44.805 "current_admin_qpairs": 0, 00:13:44.805 "current_io_qpairs": 0, 00:13:44.805 "pending_bdev_io": 0, 00:13:44.805 "completed_nvme_io": 0, 00:13:44.805 "transports": [] 00:13:44.805 }, 00:13:44.805 { 00:13:44.805 "name": "nvmf_tgt_poll_group_001", 00:13:44.805 "admin_qpairs": 0, 00:13:44.805 "io_qpairs": 0, 00:13:44.805 "current_admin_qpairs": 0, 00:13:44.805 "current_io_qpairs": 0, 00:13:44.805 "pending_bdev_io": 0, 00:13:44.805 "completed_nvme_io": 0, 00:13:44.805 "transports": [] 00:13:44.805 }, 00:13:44.805 { 00:13:44.805 "name": "nvmf_tgt_poll_group_002", 00:13:44.805 "admin_qpairs": 0, 00:13:44.805 "io_qpairs": 0, 00:13:44.805 
"current_admin_qpairs": 0, 00:13:44.805 "current_io_qpairs": 0, 00:13:44.805 "pending_bdev_io": 0, 00:13:44.805 "completed_nvme_io": 0, 00:13:44.805 "transports": [] 00:13:44.805 }, 00:13:44.805 { 00:13:44.805 "name": "nvmf_tgt_poll_group_003", 00:13:44.805 "admin_qpairs": 0, 00:13:44.805 "io_qpairs": 0, 00:13:44.805 "current_admin_qpairs": 0, 00:13:44.805 "current_io_qpairs": 0, 00:13:44.805 "pending_bdev_io": 0, 00:13:44.805 "completed_nvme_io": 0, 00:13:44.805 "transports": [] 00:13:44.805 } 00:13:44.805 ] 00:13:44.805 }' 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:44.805 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.066 [2024-11-20 16:08:20.807134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:45.066 "tick_rate": 2400000000, 00:13:45.066 "poll_groups": [ 00:13:45.066 { 00:13:45.066 "name": "nvmf_tgt_poll_group_000", 00:13:45.066 "admin_qpairs": 0, 00:13:45.066 "io_qpairs": 0, 00:13:45.066 "current_admin_qpairs": 0, 00:13:45.066 "current_io_qpairs": 0, 00:13:45.066 "pending_bdev_io": 0, 00:13:45.066 "completed_nvme_io": 0, 00:13:45.066 "transports": [ 00:13:45.066 { 00:13:45.066 "trtype": "TCP" 00:13:45.066 } 00:13:45.066 ] 00:13:45.066 }, 00:13:45.066 { 00:13:45.066 "name": "nvmf_tgt_poll_group_001", 00:13:45.066 "admin_qpairs": 0, 00:13:45.066 "io_qpairs": 0, 00:13:45.066 "current_admin_qpairs": 0, 00:13:45.066 "current_io_qpairs": 0, 00:13:45.066 "pending_bdev_io": 0, 00:13:45.066 "completed_nvme_io": 0, 00:13:45.066 "transports": [ 00:13:45.066 { 00:13:45.066 "trtype": "TCP" 00:13:45.066 } 00:13:45.066 ] 00:13:45.066 }, 00:13:45.066 { 00:13:45.066 "name": "nvmf_tgt_poll_group_002", 00:13:45.066 "admin_qpairs": 0, 00:13:45.066 "io_qpairs": 0, 00:13:45.066 "current_admin_qpairs": 0, 00:13:45.066 "current_io_qpairs": 0, 00:13:45.066 "pending_bdev_io": 0, 00:13:45.066 "completed_nvme_io": 0, 00:13:45.066 "transports": [ 00:13:45.066 { 00:13:45.066 "trtype": "TCP" 
00:13:45.066 } 00:13:45.066 ] 00:13:45.066 }, 00:13:45.066 { 00:13:45.066 "name": "nvmf_tgt_poll_group_003", 00:13:45.066 "admin_qpairs": 0, 00:13:45.066 "io_qpairs": 0, 00:13:45.066 "current_admin_qpairs": 0, 00:13:45.066 "current_io_qpairs": 0, 00:13:45.066 "pending_bdev_io": 0, 00:13:45.066 "completed_nvme_io": 0, 00:13:45.066 "transports": [ 00:13:45.066 { 00:13:45.066 "trtype": "TCP" 00:13:45.066 } 00:13:45.066 ] 00:13:45.066 } 00:13:45.066 ] 00:13:45.066 }' 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:45.066 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.067 Malloc1 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:45.067 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
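With the target up, everything is driven over JSON-RPC; nvmf_get_stats before and after transport creation confirms the TCP transport appears in all four poll groups. rpc_cmd in this harness fronts scripts/rpc.py (or an equivalent persistent RPC client); the calls traced above are equivalent to this standalone sequence (the rpc path is a placeholder):

    rpc=./scripts/rpc.py                      # placeholder; run from the SPDK tree
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # require an explicit host list
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420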
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.327 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.327 [2024-11-20 16:08:21.019459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.327 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:45.328 [2024-11-20 16:08:21.056552] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:45.328 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:45.328 could not add new controller: failed to write to nvme-fabrics device 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:45.328 16:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.328 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:47.242 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.242 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:47.242 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.242 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:47.242 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.155 [2024-11-20 16:08:24.853502] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:49.155 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:49.155 could not add new controller: failed to write to nvme-fabrics device 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.155 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.155 
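The two "Failed to write to /dev/nvme-fabrics: Input/output error" results above are the expected outcome of the access-control checks: with allow_any_host disabled, a connect from a hostnqn that is not on the subsystem's host list is refused by the target (the ctrlr.c *ERROR* lines) and the fabrics write fails. The sequence being exercised, in RPC terms (same placeholder $rpc as above):

    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"    # connect now succeeds
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN" # connect refused again
    $rpc nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1           # any hostnqn accepted

with the connect attempt itself being

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420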
16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.156 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.540 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:50.540 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:50.540 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.540 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:50.540 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:53.082 
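The waitforserial / waitforserial_disconnect helpers seen above gate each step on the block device actually appearing or disappearing: they poll lsblk for the subsystem serial rather than trusting the connect/disconnect exit status. A condensed sketch of the polling (the helper lives in autotest_common.sh; its exact structure may differ slightly):

    # poll until a device with the expected serial shows up (bounded retries)
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done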
16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.082 [2024-11-20 16:08:28.619248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.082 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.465 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.465 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:54.465 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.465 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:54.465 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.375 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.649 [2024-11-20 16:08:32.343104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.649 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:58.111 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:58.111 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:58.111 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.111 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:58.111 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:00.027 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:00.027 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:00.027 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.027 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:00.027 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.027 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:00.027 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.293 [2024-11-20 16:08:36.096868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.293 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.294 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:00.294 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.294 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.294 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.680 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:01.680 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:01.680 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.680 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:01.680 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:04.221 
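The trace is now inside target/rpc.sh's five-pass loop (for i in $(seq 1 $loops), with loops=5 set at the top of the script). Each pass recreates the subsystem, attaches Malloc1 at a fixed NSID, round-trips a connect, and tears everything down again, which verifies that namespace bookkeeping is fully reset with the subsystem. One iteration reduces to (same placeholder $rpc):

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # NSID 5 every pass
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # waitforserial, then tear down:
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done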
16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.221 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.222 [2024-11-20 16:08:39.815896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.222 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.609 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.609 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:05.609 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.609 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:05.609 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:07.521 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:07.521 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:07.521 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.521 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:07.521 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.521 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:07.521 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.782 [2024-11-20 16:08:43.580854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.782 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.167 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.167 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:09.167 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.167 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:09.167 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:11.710 
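The second loop (rpc.sh@99-107, five passes) churns the same subsystem through create and teardown with no host attached, and this time lets the target pick the namespace ID. Stripped of the tracing, each pass is (sketch; $rpc as above):

    loops=5
    for i in $(seq 1 $loops); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # no -n: first free NSID, i.e. 1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done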
16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 [2024-11-20 16:08:47.313083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:11.710 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 [2024-11-20 16:08:47.373231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 
16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 [2024-11-20 16:08:47.445438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 [2024-11-20 16:08:47.517659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 [2024-11-20 16:08:47.585856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.711 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.971 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.971 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:11.971 "tick_rate": 2400000000, 00:14:11.971 "poll_groups": [ 00:14:11.971 { 00:14:11.971 "name": "nvmf_tgt_poll_group_000", 00:14:11.971 "admin_qpairs": 0, 00:14:11.971 "io_qpairs": 224, 00:14:11.971 "current_admin_qpairs": 0, 00:14:11.971 "current_io_qpairs": 0, 00:14:11.971 "pending_bdev_io": 0, 00:14:11.971 "completed_nvme_io": 382, 00:14:11.971 "transports": [ 00:14:11.971 { 00:14:11.971 "trtype": "TCP" 00:14:11.971 } 00:14:11.971 ] 00:14:11.971 }, 00:14:11.971 { 00:14:11.971 "name": "nvmf_tgt_poll_group_001", 00:14:11.971 "admin_qpairs": 1, 00:14:11.971 "io_qpairs": 223, 00:14:11.971 "current_admin_qpairs": 0, 00:14:11.971 "current_io_qpairs": 0, 00:14:11.971 "pending_bdev_io": 0, 00:14:11.971 "completed_nvme_io": 389, 00:14:11.971 "transports": [ 00:14:11.971 { 00:14:11.971 "trtype": "TCP" 00:14:11.971 } 00:14:11.971 ] 00:14:11.971 }, 00:14:11.971 { 00:14:11.971 "name": "nvmf_tgt_poll_group_002", 00:14:11.971 "admin_qpairs": 6, 00:14:11.971 "io_qpairs": 218, 00:14:11.971 "current_admin_qpairs": 0, 00:14:11.971 "current_io_qpairs": 0, 00:14:11.971 "pending_bdev_io": 0, 00:14:11.971 "completed_nvme_io": 218, 00:14:11.971 "transports": [ 00:14:11.972 { 00:14:11.972 "trtype": "TCP" 00:14:11.972 } 00:14:11.972 ] 00:14:11.972 }, 00:14:11.972 { 00:14:11.972 "name": "nvmf_tgt_poll_group_003", 00:14:11.972 "admin_qpairs": 0, 00:14:11.972 "io_qpairs": 224, 00:14:11.972 "current_admin_qpairs": 0, 00:14:11.972 "current_io_qpairs": 0, 00:14:11.972 "pending_bdev_io": 0, 00:14:11.972 "completed_nvme_io": 250, 00:14:11.972 "transports": [ 00:14:11.972 { 00:14:11.972 "trtype": "TCP" 00:14:11.972 } 00:14:11.972 ] 00:14:11.972 } 00:14:11.972 ] 00:14:11.972 }' 00:14:11.972 16:08:47 
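jsum (rpc.sh@19-20) reduces the nvmf_get_stats JSON above to a single number: jq extracts one field per poll group and awk sums the column. The trace does not show jq's input explicitly; reading it from the $stats capture above is the natural reconstruction:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 0 + 1 + 6 + 0 = 7
    jsum '.poll_groups[].io_qpairs'      # 224 + 223 + 218 + 224 = 889

The assertions that follow only require the totals to be positive ((( 7 > 0 )) and (( 889 > 0 ))), i.e. that qpair activity was actually accounted against the four poll groups.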
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.972 rmmod nvme_tcp 00:14:11.972 rmmod nvme_fabrics 00:14:11.972 rmmod nvme_keyring 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1195980 ']' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1195980 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1195980 ']' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1195980 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195980 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1195980' 00:14:11.972 killing process with pid 1195980 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1195980 00:14:11.972 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1195980 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.232 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:14.778 00:14:14.778 real 0m38.197s 00:14:14.778 user 1m54.186s 00:14:14.778 sys 0m7.975s 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.778 ************************************ 00:14:14.778 END TEST nvmf_rpc 00:14:14.778 ************************************ 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.778 ************************************ 00:14:14.778 START TEST nvmf_invalid 00:14:14.778 ************************************ 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:14.778 * Looking for test storage... 
00:14:14.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.778 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:14.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.779 --rc genhtml_branch_coverage=1 00:14:14.779 --rc genhtml_function_coverage=1 00:14:14.779 --rc genhtml_legend=1 00:14:14.779 --rc geninfo_all_blocks=1 00:14:14.779 --rc geninfo_unexecuted_blocks=1 00:14:14.779 00:14:14.779 ' 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:14.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.779 --rc genhtml_branch_coverage=1 00:14:14.779 --rc genhtml_function_coverage=1 00:14:14.779 --rc genhtml_legend=1 00:14:14.779 --rc geninfo_all_blocks=1 00:14:14.779 --rc geninfo_unexecuted_blocks=1 00:14:14.779 00:14:14.779 ' 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:14.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.779 --rc genhtml_branch_coverage=1 00:14:14.779 --rc genhtml_function_coverage=1 00:14:14.779 --rc genhtml_legend=1 00:14:14.779 --rc geninfo_all_blocks=1 00:14:14.779 --rc geninfo_unexecuted_blocks=1 00:14:14.779 00:14:14.779 ' 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:14.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.779 --rc genhtml_branch_coverage=1 00:14:14.779 --rc genhtml_function_coverage=1 00:14:14.779 --rc genhtml_legend=1 00:14:14.779 --rc geninfo_all_blocks=1 00:14:14.779 --rc geninfo_unexecuted_blocks=1 00:14:14.779 00:14:14.779 ' 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:14.779 16:08:50 
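The detour through scripts/common.sh above is the coverage setup deciding whether the installed lcov (1.15) predates version 2 before choosing its flags; since 1 < 2 already holds in the first version component, lt returns 0 and the pre-2.0 --rc lcov_branch_coverage / --rc lcov_function_coverage options are exported. The traced path boils down to roughly this (a sketch of this branch only; the real cmp_versions also handles the other comparison operators and equal versions):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && return 1
            ((ver1[v] < ver2[v])) && return 0    # taken here: 1 < 2
        done
        return 1
    }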
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.779 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:14.780 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:22.917 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:22.917 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:22.917 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:22.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:22.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:14:22.917 00:14:22.917 --- 10.0.0.2 ping statistics --- 00:14:22.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.917 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:14:22.917 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:22.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
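
nvmf_tcp_init, traced above, wires the two ports into a loopback topology: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and a comment-tagged iptables rule opens TCP/4420 so teardown can later identify exactly the rules this test added. Condensed from the trace (error handling omitted):

    ip netns add cvl_0_0_ns_spdk                # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The SPDK_NVMF comment marks the rule so cleanup can filter it back out.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
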
00:14:22.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:14:22.918 00:14:22.918 --- 10.0.0.1 ping statistics --- 00:14:22.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.918 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1205701 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1205701 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1205701 ']' 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.918 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:22.918 [2024-11-20 16:08:58.036147] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
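
nvmfappstart then launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the JSON-RPC socket answers. The helper itself is not fully traced here, so the following is a hypothetical equivalent of the launch-and-wait pattern, using the workspace paths from this run; polling with rpc_get_methods is an assumption, not something shown in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # Consider the target up once any RPC succeeds on the default UNIX socket.
        "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done
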
00:14:22.918 [2024-11-20 16:08:58.036251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.918 [2024-11-20 16:08:58.140168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.918 [2024-11-20 16:08:58.193410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.918 [2024-11-20 16:08:58.193461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.918 [2024-11-20 16:08:58.193470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.918 [2024-11-20 16:08:58.193477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.918 [2024-11-20 16:08:58.193483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.918 [2024-11-20 16:08:58.195480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.918 [2024-11-20 16:08:58.195641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.918 [2024-11-20 16:08:58.195807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.918 [2024-11-20 16:08:58.195807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:23.179 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6824 00:14:23.179 [2024-11-20 16:08:59.069108] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:23.179 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:23.179 { 00:14:23.179 "nqn": "nqn.2016-06.io.spdk:cnode6824", 00:14:23.179 "tgt_name": "foobar", 00:14:23.179 "method": "nvmf_create_subsystem", 00:14:23.179 "req_id": 1 00:14:23.179 } 00:14:23.179 Got JSON-RPC error response 00:14:23.179 response: 00:14:23.179 { 00:14:23.179 "code": -32603, 00:14:23.179 "message": "Unable to find target foobar" 00:14:23.179 }' 00:14:23.179 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:23.179 { 00:14:23.179 "nqn": "nqn.2016-06.io.spdk:cnode6824", 00:14:23.179 "tgt_name": "foobar", 00:14:23.179 "method": "nvmf_create_subsystem", 00:14:23.179 "req_id": 1 00:14:23.179 } 00:14:23.179 Got JSON-RPC error response 00:14:23.179 
response: 00:14:23.179 { 00:14:23.179 "code": -32603, 00:14:23.179 "message": "Unable to find target foobar" 00:14:23.179 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:23.179 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:23.179 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16868 00:14:23.439 [2024-11-20 16:08:59.277968] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16868: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:23.439 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:23.439 { 00:14:23.439 "nqn": "nqn.2016-06.io.spdk:cnode16868", 00:14:23.439 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:23.439 "method": "nvmf_create_subsystem", 00:14:23.439 "req_id": 1 00:14:23.439 } 00:14:23.439 Got JSON-RPC error response 00:14:23.439 response: 00:14:23.439 { 00:14:23.439 "code": -32602, 00:14:23.439 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:23.439 }' 00:14:23.439 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:23.439 { 00:14:23.439 "nqn": "nqn.2016-06.io.spdk:cnode16868", 00:14:23.439 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:23.439 "method": "nvmf_create_subsystem", 00:14:23.439 "req_id": 1 00:14:23.439 } 00:14:23.439 Got JSON-RPC error response 00:14:23.439 response: 00:14:23.439 { 00:14:23.439 "code": -32602, 00:14:23.439 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:23.439 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:23.439 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:23.439 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9387 00:14:23.703 [2024-11-20 16:08:59.482699] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9387: invalid model number 'SPDK_Controller' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:23.703 { 00:14:23.703 "nqn": "nqn.2016-06.io.spdk:cnode9387", 00:14:23.703 "model_number": "SPDK_Controller\u001f", 00:14:23.703 "method": "nvmf_create_subsystem", 00:14:23.703 "req_id": 1 00:14:23.703 } 00:14:23.703 Got JSON-RPC error response 00:14:23.703 response: 00:14:23.703 { 00:14:23.703 "code": -32602, 00:14:23.703 "message": "Invalid MN SPDK_Controller\u001f" 00:14:23.703 }' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:23.703 { 00:14:23.703 "nqn": "nqn.2016-06.io.spdk:cnode9387", 00:14:23.703 "model_number": "SPDK_Controller\u001f", 00:14:23.703 "method": "nvmf_create_subsystem", 00:14:23.703 "req_id": 1 00:14:23.703 } 00:14:23.703 Got JSON-RPC error response 00:14:23.703 response: 00:14:23.703 { 00:14:23.703 "code": -32602, 00:14:23.703 "message": "Invalid MN SPDK_Controller\u001f" 00:14:23.703 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:23.703 16:08:59 
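
Every negative probe in this test follows the same shape, visible in the three traces above: call rpc.py with one deliberately bad argument, capture the JSON-RPC error text, and glob-match the expected message. Reconstructed from the trace (the || true guard is an assumption so the capture survives errexit):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Nonexistent target name -> "Unable to find target foobar".
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6824 2>&1) || true
    [[ $out == *'Unable to find target'* ]]
    # A 0x1f control character makes the serial number invalid -> "Invalid SN".
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16868 2>&1) || true
    [[ $out == *'Invalid SN'* ]]

The probes that follow below generate random serial and model numbers character by character to hit the same Invalid SN / Invalid MN checks.
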
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:23.703 16:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:23.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:23.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:23.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:23.966 
16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'b)Pi%7Wasa2'\''-Az[nUXxS' 00:14:23.966 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'b)Pi%7Wasa2'\''-Az[nUXxS' nqn.2016-06.io.spdk:cnode27584 00:14:23.966 [2024-11-20 16:08:59.868192] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27584: invalid serial number 'b)Pi%7Wasa2'-Az[nUXxS' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:24.229 { 00:14:24.229 "nqn": "nqn.2016-06.io.spdk:cnode27584", 00:14:24.229 "serial_number": "b)Pi%7Wasa2'\''-Az[nUXxS", 00:14:24.229 "method": "nvmf_create_subsystem", 00:14:24.229 "req_id": 1 00:14:24.229 } 00:14:24.229 Got JSON-RPC error response 00:14:24.229 response: 00:14:24.229 { 00:14:24.229 "code": -32602, 00:14:24.229 "message": "Invalid SN b)Pi%7Wasa2'\''-Az[nUXxS" 00:14:24.229 }' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:24.229 { 00:14:24.229 "nqn": "nqn.2016-06.io.spdk:cnode27584", 00:14:24.229 "serial_number": "b)Pi%7Wasa2'-Az[nUXxS", 00:14:24.229 "method": "nvmf_create_subsystem", 00:14:24.229 "req_id": 1 00:14:24.229 } 00:14:24.229 Got JSON-RPC error response 00:14:24.229 response: 00:14:24.229 { 00:14:24.229 "code": -32602, 00:14:24.229 "message": "Invalid SN b)Pi%7Wasa2'-Az[nUXxS" 00:14:24.229 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' 
'75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:24.229 
16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:24.229 
16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.229 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.230 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=L 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x48' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' ,Hu;vzkC*i5D>{Q]55h#Ef;"XFA5t[ LnKoU@tHo' 00:14:24.492 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ' ,Hu;vzkC*i5D>{Q]55h#Ef;"XFA5t[ LnKoU@tHo' nqn.2016-06.io.spdk:cnode31445 00:14:24.492 [2024-11-20 16:09:00.418239] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31445: invalid model number ' ,Hu;vzkC*i5D>{Q]55h#Ef;"XFA5t[ LnKoU@tHo' 00:14:24.754 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:24.754 { 00:14:24.754 "nqn": "nqn.2016-06.io.spdk:cnode31445", 00:14:24.754 "model_number": " ,Hu;vzkC*i5D>{Q]55h#Ef;\"XFA5t[ LnKoU@tHo", 00:14:24.754 "method": "nvmf_create_subsystem", 00:14:24.754 "req_id": 1 00:14:24.754 } 00:14:24.754 Got JSON-RPC error response 00:14:24.754 response: 00:14:24.754 { 00:14:24.754 "code": -32602, 00:14:24.754 "message": "Invalid MN ,Hu;vzkC*i5D>{Q]55h#Ef;\"XFA5t[ LnKoU@tHo" 00:14:24.754 }' 00:14:24.754 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:24.754 { 00:14:24.754 "nqn": "nqn.2016-06.io.spdk:cnode31445", 00:14:24.754 "model_number": " ,Hu;vzkC*i5D>{Q]55h#Ef;\"XFA5t[ LnKoU@tHo", 00:14:24.754 "method": "nvmf_create_subsystem", 00:14:24.754 "req_id": 1 00:14:24.754 } 00:14:24.754 Got JSON-RPC error response 00:14:24.754 response: 00:14:24.754 { 00:14:24.754 "code": -32602, 00:14:24.754 "message": "Invalid MN ,Hu;vzkC*i5D>{Q]55h#Ef;\"XFA5t[ LnKoU@tHo" 00:14:24.754 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:24.754 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:24.754 [2024-11-20 16:09:00.627118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.754 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:25.015 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:25.015 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:25.015 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:25.015 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
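
The two strings exercised above, the 21-character serial "b)Pi%7Wasa2'-Az[nUXxS" and the 41-character model number, come from the gen_random_s helper whose per-character trace fills the preceding lines: it draws random codes from the ASCII range 32-127, converts each to a character via printf %x plus echo -e, and compares the first character against '-'. A compact reconstruction (the leading-dash escape is an assumption; the trace only shows the comparison):

    gen_random_s() {
        local length=$1 ll string
        local chars=({32..127})   # printable ASCII plus DEL, as enumerated above
        for ((ll = 0; ll < length; ll++)); do
            # printf yields the hex code, echo -e turns it back into a character
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # Presumably escaped so rpc.py does not parse the string as an option.
        [[ ${string::1} == "-" ]] && string=${string/-/\\-}
        echo "$string"
    }
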
target/invalid.sh@67 -- # IP= 00:14:25.015 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:25.275 [2024-11-20 16:09:01.032492] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:25.275 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:25.275 { 00:14:25.275 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:25.275 "listen_address": { 00:14:25.275 "trtype": "tcp", 00:14:25.275 "traddr": "", 00:14:25.275 "trsvcid": "4421" 00:14:25.275 }, 00:14:25.275 "method": "nvmf_subsystem_remove_listener", 00:14:25.275 "req_id": 1 00:14:25.275 } 00:14:25.275 Got JSON-RPC error response 00:14:25.275 response: 00:14:25.275 { 00:14:25.275 "code": -32602, 00:14:25.275 "message": "Invalid parameters" 00:14:25.275 }' 00:14:25.275 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:25.275 { 00:14:25.275 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:25.275 "listen_address": { 00:14:25.275 "trtype": "tcp", 00:14:25.275 "traddr": "", 00:14:25.275 "trsvcid": "4421" 00:14:25.275 }, 00:14:25.275 "method": "nvmf_subsystem_remove_listener", 00:14:25.275 "req_id": 1 00:14:25.275 } 00:14:25.275 Got JSON-RPC error response 00:14:25.275 response: 00:14:25.275 { 00:14:25.275 "code": -32602, 00:14:25.275 "message": "Invalid parameters" 00:14:25.275 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:25.275 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6411 -i 0 00:14:25.535 [2024-11-20 16:09:01.221089] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6411: invalid cntlid range [0-65519] 00:14:25.535 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:25.535 { 00:14:25.535 "nqn": "nqn.2016-06.io.spdk:cnode6411", 00:14:25.535 "min_cntlid": 0, 00:14:25.535 "method": "nvmf_create_subsystem", 00:14:25.535 "req_id": 1 00:14:25.535 } 00:14:25.535 Got JSON-RPC error response 00:14:25.535 response: 00:14:25.535 { 00:14:25.535 "code": -32602, 00:14:25.535 "message": "Invalid cntlid range [0-65519]" 00:14:25.535 }' 00:14:25.535 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:25.535 { 00:14:25.535 "nqn": "nqn.2016-06.io.spdk:cnode6411", 00:14:25.535 "min_cntlid": 0, 00:14:25.535 "method": "nvmf_create_subsystem", 00:14:25.535 "req_id": 1 00:14:25.535 } 00:14:25.535 Got JSON-RPC error response 00:14:25.535 response: 00:14:25.535 { 00:14:25.535 "code": -32602, 00:14:25.535 "message": "Invalid cntlid range [0-65519]" 00:14:25.535 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:25.535 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27549 -i 65520 00:14:25.535 [2024-11-20 16:09:01.409633] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27549: invalid cntlid range [65520-65519] 00:14:25.535 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:25.535 { 00:14:25.535 "nqn": "nqn.2016-06.io.spdk:cnode27549", 00:14:25.535 "min_cntlid": 65520, 
00:14:25.535 "method": "nvmf_create_subsystem", 00:14:25.535 "req_id": 1 00:14:25.535 } 00:14:25.535 Got JSON-RPC error response 00:14:25.535 response: 00:14:25.535 { 00:14:25.535 "code": -32602, 00:14:25.535 "message": "Invalid cntlid range [65520-65519]" 00:14:25.535 }' 00:14:25.535 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:25.535 { 00:14:25.535 "nqn": "nqn.2016-06.io.spdk:cnode27549", 00:14:25.535 "min_cntlid": 65520, 00:14:25.535 "method": "nvmf_create_subsystem", 00:14:25.535 "req_id": 1 00:14:25.535 } 00:14:25.535 Got JSON-RPC error response 00:14:25.535 response: 00:14:25.535 { 00:14:25.535 "code": -32602, 00:14:25.535 "message": "Invalid cntlid range [65520-65519]" 00:14:25.535 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:25.535 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14060 -I 0 00:14:25.795 [2024-11-20 16:09:01.598219] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14060: invalid cntlid range [1-0] 00:14:25.795 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:25.795 { 00:14:25.795 "nqn": "nqn.2016-06.io.spdk:cnode14060", 00:14:25.795 "max_cntlid": 0, 00:14:25.795 "method": "nvmf_create_subsystem", 00:14:25.795 "req_id": 1 00:14:25.795 } 00:14:25.795 Got JSON-RPC error response 00:14:25.795 response: 00:14:25.795 { 00:14:25.795 "code": -32602, 00:14:25.795 "message": "Invalid cntlid range [1-0]" 00:14:25.795 }' 00:14:25.795 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:25.795 { 00:14:25.795 "nqn": "nqn.2016-06.io.spdk:cnode14060", 00:14:25.795 "max_cntlid": 0, 00:14:25.795 "method": "nvmf_create_subsystem", 00:14:25.795 "req_id": 1 00:14:25.795 } 00:14:25.795 Got JSON-RPC error response 00:14:25.795 response: 00:14:25.795 { 00:14:25.795 "code": -32602, 00:14:25.795 "message": "Invalid cntlid range [1-0]" 00:14:25.795 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:25.795 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32079 -I 65520 00:14:26.056 [2024-11-20 16:09:01.786822] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32079: invalid cntlid range [1-65520] 00:14:26.056 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:26.056 { 00:14:26.056 "nqn": "nqn.2016-06.io.spdk:cnode32079", 00:14:26.056 "max_cntlid": 65520, 00:14:26.056 "method": "nvmf_create_subsystem", 00:14:26.056 "req_id": 1 00:14:26.056 } 00:14:26.056 Got JSON-RPC error response 00:14:26.056 response: 00:14:26.056 { 00:14:26.056 "code": -32602, 00:14:26.056 "message": "Invalid cntlid range [1-65520]" 00:14:26.056 }' 00:14:26.056 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:26.056 { 00:14:26.056 "nqn": "nqn.2016-06.io.spdk:cnode32079", 00:14:26.056 "max_cntlid": 65520, 00:14:26.056 "method": "nvmf_create_subsystem", 00:14:26.056 "req_id": 1 00:14:26.056 } 00:14:26.056 Got JSON-RPC error response 00:14:26.056 response: 00:14:26.056 { 00:14:26.056 "code": -32602, 00:14:26.056 "message": "Invalid cntlid range [1-65520]" 00:14:26.056 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
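
Taken together, the four rejections above pin down the controller-ID rule: 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF), so 0 and 65520 fail at either bound, and the probe that follows below shows min > max ("[6-5]") failing as well. The -i/-I flags map to min_cntlid/max_cntlid in these traces, so a well-formed call at the extremes would look like this (the cnode name is illustrative):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -i 1 -I 65519
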
00:14:26.056 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32509 -i 6 -I 5 00:14:26.056 [2024-11-20 16:09:01.975419] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32509: invalid cntlid range [6-5] 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:26.316 { 00:14:26.316 "nqn": "nqn.2016-06.io.spdk:cnode32509", 00:14:26.316 "min_cntlid": 6, 00:14:26.316 "max_cntlid": 5, 00:14:26.316 "method": "nvmf_create_subsystem", 00:14:26.316 "req_id": 1 00:14:26.316 } 00:14:26.316 Got JSON-RPC error response 00:14:26.316 response: 00:14:26.316 { 00:14:26.316 "code": -32602, 00:14:26.316 "message": "Invalid cntlid range [6-5]" 00:14:26.316 }' 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:26.316 { 00:14:26.316 "nqn": "nqn.2016-06.io.spdk:cnode32509", 00:14:26.316 "min_cntlid": 6, 00:14:26.316 "max_cntlid": 5, 00:14:26.316 "method": "nvmf_create_subsystem", 00:14:26.316 "req_id": 1 00:14:26.316 } 00:14:26.316 Got JSON-RPC error response 00:14:26.316 response: 00:14:26.316 { 00:14:26.316 "code": -32602, 00:14:26.316 "message": "Invalid cntlid range [6-5]" 00:14:26.316 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:26.316 { 00:14:26.316 "name": "foobar", 00:14:26.316 "method": "nvmf_delete_target", 00:14:26.316 "req_id": 1 00:14:26.316 } 00:14:26.316 Got JSON-RPC error response 00:14:26.316 response: 00:14:26.316 { 00:14:26.316 "code": -32602, 00:14:26.316 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:26.316 }' 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:26.316 { 00:14:26.316 "name": "foobar", 00:14:26.316 "method": "nvmf_delete_target", 00:14:26.316 "req_id": 1 00:14:26.316 } 00:14:26.316 Got JSON-RPC error response 00:14:26.316 response: 00:14:26.316 { 00:14:26.316 "code": -32602, 00:14:26.316 "message": "The specified target doesn't exist, cannot delete it." 
00:14:26.316 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:26.316 rmmod nvme_tcp 00:14:26.316 rmmod nvme_fabrics 00:14:26.316 rmmod nvme_keyring 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1205701 ']' 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1205701 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1205701 ']' 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1205701 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.316 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1205701 00:14:26.577 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.577 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1205701' 00:14:26.578 killing process with pid 1205701 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1205701 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1205701 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.578 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:29.120 00:14:29.120 real 0m14.260s 00:14:29.120 user 0m21.276s 00:14:29.120 sys 0m6.824s 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:29.120 ************************************ 00:14:29.120 END TEST nvmf_invalid 00:14:29.120 ************************************ 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:29.120 ************************************ 00:14:29.120 START TEST nvmf_connect_stress 00:14:29.120 ************************************ 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:29.120 * Looking for test storage... 
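Before the storage probe continues, note what the nvmf_invalid teardown just traced amounts to once the xtrace noise is stripped; a condensed sketch using the pid and interface names from this run (killprocess and _remove_spdk_ns do a bit more bookkeeping than shown):

modprobe -v -r nvme-tcp        # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines above)
modprobe -v -r nvme-fabrics    # no-op here, already removed by the cascade
kill 1205701 && wait 1205701   # stop the nvmf_tgt reactor process
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
ip -4 addr flush cvl_0_1       # clear the initiator-side test address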
00:14:29.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.120 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.121 --rc genhtml_branch_coverage=1 00:14:29.121 --rc genhtml_function_coverage=1 00:14:29.121 --rc genhtml_legend=1 00:14:29.121 --rc geninfo_all_blocks=1 00:14:29.121 --rc geninfo_unexecuted_blocks=1 00:14:29.121 00:14:29.121 ' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.121 --rc genhtml_branch_coverage=1 00:14:29.121 --rc genhtml_function_coverage=1 00:14:29.121 --rc genhtml_legend=1 00:14:29.121 --rc geninfo_all_blocks=1 00:14:29.121 --rc geninfo_unexecuted_blocks=1 00:14:29.121 00:14:29.121 ' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.121 --rc genhtml_branch_coverage=1 00:14:29.121 --rc genhtml_function_coverage=1 00:14:29.121 --rc genhtml_legend=1 00:14:29.121 --rc geninfo_all_blocks=1 00:14:29.121 --rc geninfo_unexecuted_blocks=1 00:14:29.121 00:14:29.121 ' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.121 --rc genhtml_branch_coverage=1 00:14:29.121 --rc genhtml_function_coverage=1 00:14:29.121 --rc genhtml_legend=1 00:14:29.121 --rc geninfo_all_blocks=1 00:14:29.121 --rc geninfo_unexecuted_blocks=1 00:14:29.121 00:14:29.121 ' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:29.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.121 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.122 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.122 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:29.122 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:29.122 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:29.122 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:37.274 16:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:37.274 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:37.274 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:37.274 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.274 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:37.275 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:37.275 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:37.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:14:37.275 00:14:37.275 --- 10.0.0.2 ping statistics --- 00:14:37.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.275 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:14:37.275 00:14:37.275 --- 10.0.0.1 ping statistics --- 00:14:37.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.275 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1211430 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1211430 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1211430 ']' 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:37.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.275 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 [2024-11-20 16:09:12.228077] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:14:37.275 [2024-11-20 16:09:12.228145] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.275 [2024-11-20 16:09:12.326391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:37.275 [2024-11-20 16:09:12.365357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.275 [2024-11-20 16:09:12.365397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.275 [2024-11-20 16:09:12.365403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.275 [2024-11-20 16:09:12.365409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.275 [2024-11-20 16:09:12.365414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.275 [2024-11-20 16:09:12.366818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.275 [2024-11-20 16:09:12.366970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.275 [2024-11-20 16:09:12.366972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 [2024-11-20 16:09:13.079383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
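The fixture for this test is now fully assembled, and stripped of xtrace noise it is compact: nvmf_tcp_init moved the first e810 port (cvl_0_0, 10.0.0.2/24) into a private namespace as the target, left the second (cvl_0_1, 10.0.0.1/24) in the default namespace as the initiator, punched one firewall rule for the NVMe/TCP port, and connect_stress.sh then launched the target and began configuring it over JSON-RPC (the listener and null bdev follow just below). The steps actually issued, with address flushes and bookkeeping omitted:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up && ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic (pings above verify both directions)
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # pid 1211430 above; rpc_cmd wraps scripts/rpc.py
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10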
00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 [2024-11-20 16:09:13.103759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 NULL1 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1211705 00:14:37.275 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.276 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.536 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.536 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.536 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.536 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.536 16:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:37.536 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.537 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.537 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.797 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.797 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:37.797 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.797 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.797 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.058 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.058 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:38.058 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.058 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.058 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.317 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.317 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:38.317 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.317 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.317 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.885 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.885 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:38.885 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.885 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.885 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.144 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.144 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:39.144 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.144 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.144 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.404 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.404 16:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:39.404 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.404 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.404 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.663 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.663 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:39.663 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.663 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.663 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.922 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.922 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:39.922 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.922 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.922 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.564 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.564 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:40.564 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.564 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.564 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.827 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.827 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:40.827 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.827 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.827 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.088 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.088 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:41.088 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.088 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.088 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.348 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.348 16:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:41.348 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.348 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.348 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.608 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.608 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:41.608 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.608 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.608 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.868 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.868 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:41.868 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.868 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.868 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.439 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.439 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:42.439 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.439 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.439 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.699 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.699 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:42.699 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.699 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.699 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.958 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.959 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:42.959 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.959 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.959 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.219 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.219 16:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:43.219 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.219 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.219 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.479 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.479 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:43.479 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.479 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.479 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.051 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.051 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:44.051 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.051 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.051 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.313 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.313 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:44.313 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.313 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.313 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:44.574 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.574 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.835 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.835 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:44.835 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.835 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.835 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.095 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.095 16:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:45.095 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.095 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.095 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.667 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.667 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:45.667 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.667 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.667 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.928 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.928 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:45.928 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.928 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.928 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.189 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.189 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:46.189 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.189 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.189 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:46.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.023 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.023 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:47.023 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.023 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.023 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.285 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.285 16:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:47.285 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.285 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.285 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.547 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1211705 00:14:47.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1211705) - No such process 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1211705 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.547 rmmod nvme_tcp 00:14:47.547 rmmod nvme_fabrics 00:14:47.547 rmmod nvme_keyring 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1211430 ']' 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1211430 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1211430 ']' 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1211430 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1211430 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
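The connect_stress trace above reduces to a watchdog pattern: poll a backgrounded stress process with kill -0 while issuing an RPC against the target on every pass, then reap the process once kill -0 fails. A minimal sketch of that loop, assuming a hypothetical stress binary and a cheap RPC (the trace only shows "kill -0 <pid>" and a bare "rpc_cmd"):

  stress_bin -r "$TRID" &                 # hypothetical launch of the stress workload
  pid=$!
  while kill -0 "$pid" 2>/dev/null; do    # exits 0 while the process is still alive
      rpc_cmd rpc_get_methods > rpc.txt   # assumed RPC; any cheap call loads the target
  done
  wait "$pid"                             # reap it; the "No such process" above is the
  rm -f rpc.txt                           # benign kill -0 failure that ends the loop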
00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1211430' 00:14:47.547 killing process with pid 1211430 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1211430 00:14:47.547 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1211430 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.809 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.721 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:49.721 00:14:49.721 real 0m21.123s 00:14:49.721 user 0m42.214s 00:14:49.721 sys 0m9.108s 00:14:49.721 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.721 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.721 ************************************ 00:14:49.721 END TEST nvmf_connect_stress 00:14:49.721 ************************************ 00:14:49.981 16:09:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:49.981 16:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:49.981 16:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.981 16:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.981 ************************************ 00:14:49.981 START TEST nvmf_fused_ordering 00:14:49.981 ************************************ 00:14:49.981 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:49.981 * Looking for test storage... 
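The nvmftestfini teardown traced just above follows a fixed sequence: unload the NVMe/TCP kernel modules, kill the target process, drop only the firewall rules the harness tagged, and remove the test namespace. A minimal sketch, assuming the netns deletion is what _remove_spdk_ns does (the trace elides its body):

  modprobe -v -r nvme-tcp nvme-fabrics                  # nvme_keyring goes with them, per the rmmod lines above
  kill "$nvmfpid" && wait "$nvmfpid"                    # reap the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep every rule without the SPDK tag
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                              # flush the initiator-side address

Because only rules carrying the SPDK_NVMF comment are filtered out, unrelated iptables state on the host survives the run.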
00:14:49.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:49.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.982 --rc genhtml_branch_coverage=1 00:14:49.982 --rc genhtml_function_coverage=1 00:14:49.982 --rc genhtml_legend=1 00:14:49.982 --rc geninfo_all_blocks=1 00:14:49.982 --rc geninfo_unexecuted_blocks=1 00:14:49.982 00:14:49.982 ' 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:49.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.982 --rc genhtml_branch_coverage=1 00:14:49.982 --rc genhtml_function_coverage=1 00:14:49.982 --rc genhtml_legend=1 00:14:49.982 --rc geninfo_all_blocks=1 00:14:49.982 --rc geninfo_unexecuted_blocks=1 00:14:49.982 00:14:49.982 ' 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:49.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.982 --rc genhtml_branch_coverage=1 00:14:49.982 --rc genhtml_function_coverage=1 00:14:49.982 --rc genhtml_legend=1 00:14:49.982 --rc geninfo_all_blocks=1 00:14:49.982 --rc geninfo_unexecuted_blocks=1 00:14:49.982 00:14:49.982 ' 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:49.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.982 --rc genhtml_branch_coverage=1 00:14:49.982 --rc genhtml_function_coverage=1 00:14:49.982 --rc genhtml_legend=1 00:14:49.982 --rc geninfo_all_blocks=1 00:14:49.982 --rc geninfo_unexecuted_blocks=1 00:14:49.982 00:14:49.982 ' 00:14:49.982 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.244 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:50.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.245 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.384 16:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.384 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:58.385 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:58.385 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:58.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:58.385 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:14:58.385 00:14:58.385 --- 10.0.0.2 ping statistics --- 00:14:58.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.385 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:14:58.385 00:14:58.385 --- 10.0.0.1 ping statistics --- 00:14:58.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.385 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1217915 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1217915 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1217915 ']' 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:58.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.385 16:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.385 [2024-11-20 16:09:33.508564] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:14:58.385 [2024-11-20 16:09:33.508633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.385 [2024-11-20 16:09:33.609573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.386 [2024-11-20 16:09:33.660877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.386 [2024-11-20 16:09:33.660931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.386 [2024-11-20 16:09:33.660940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.386 [2024-11-20 16:09:33.660947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.386 [2024-11-20 16:09:33.660953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.386 [2024-11-20 16:09:33.661733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.646 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.646 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:58.646 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 [2024-11-20 16:09:34.379935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 [2024-11-20 16:09:34.404285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 NULL1 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.647 16:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:58.647 [2024-11-20 16:09:34.474989] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
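The rpc_cmd sequence above is the entire target-side bring-up for this test: one TCP transport, one subsystem, one listener, and a null bdev attached as namespace 1 (which is why the initiator reports "Namespace ID: 1 size: 1GB" below). Driven by hand against a running nvmf_tgt it would look like the following sketch; using scripts/rpc.py with its default RPC socket is an assumption, since the trace goes through the harness's rpc_cmd wrapper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -o is TCP-specific; -u sets 8192 B in-capsule data
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512 B blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering initiator launched next connects to that listener using the transport ID string shown in the trace (trtype:tcp traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1).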
00:14:58.647 [2024-11-20 16:09:34.475033] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218096 ] 00:14:59.218 Attached to nqn.2016-06.io.spdk:cnode1 00:14:59.218 Namespace ID: 1 size: 1GB 00:14:59.218 fused_ordering(0) 00:14:59.218 fused_ordering(1) 00:14:59.218 fused_ordering(2) 00:14:59.218 fused_ordering(3) 00:14:59.218 fused_ordering(4) 00:14:59.218 fused_ordering(5) 00:14:59.218 fused_ordering(6) 00:14:59.218 fused_ordering(7) 00:14:59.218 fused_ordering(8) 00:14:59.218 fused_ordering(9) 00:14:59.218 fused_ordering(10) 00:14:59.218 fused_ordering(11) 00:14:59.218 fused_ordering(12) 00:14:59.218 fused_ordering(13) 00:14:59.219 fused_ordering(14) 00:14:59.219 fused_ordering(15) 00:14:59.219 fused_ordering(16) 00:14:59.219 fused_ordering(17) 00:14:59.219 fused_ordering(18) 00:14:59.219 fused_ordering(19) 00:14:59.219 fused_ordering(20) 00:14:59.219 fused_ordering(21) 00:14:59.219 fused_ordering(22) 00:14:59.219 fused_ordering(23) 00:14:59.219 fused_ordering(24) 00:14:59.219 fused_ordering(25) 00:14:59.219 fused_ordering(26) 00:14:59.219 fused_ordering(27) 00:14:59.219 fused_ordering(28) 00:14:59.219 fused_ordering(29) 00:14:59.219 fused_ordering(30) 00:14:59.219 fused_ordering(31) 00:14:59.219 fused_ordering(32) 00:14:59.219 fused_ordering(33) 00:14:59.219 fused_ordering(34) 00:14:59.219 fused_ordering(35) 00:14:59.219 fused_ordering(36) 00:14:59.219 fused_ordering(37) 00:14:59.219 fused_ordering(38) 00:14:59.219 fused_ordering(39) 00:14:59.219 fused_ordering(40) 00:14:59.219 fused_ordering(41) 00:14:59.219 fused_ordering(42) 00:14:59.219 fused_ordering(43) 00:14:59.219 fused_ordering(44) 00:14:59.219 fused_ordering(45) 00:14:59.219 fused_ordering(46) 00:14:59.219 fused_ordering(47) 00:14:59.219 fused_ordering(48) 00:14:59.219 fused_ordering(49) 00:14:59.219 fused_ordering(50) 00:14:59.219 fused_ordering(51) 00:14:59.219 fused_ordering(52) 00:14:59.219 fused_ordering(53) 00:14:59.219 fused_ordering(54) 00:14:59.219 fused_ordering(55) 00:14:59.219 fused_ordering(56) 00:14:59.219 fused_ordering(57) 00:14:59.219 fused_ordering(58) 00:14:59.219 fused_ordering(59) 00:14:59.219 fused_ordering(60) 00:14:59.219 fused_ordering(61) 00:14:59.219 fused_ordering(62) 00:14:59.219 fused_ordering(63) 00:14:59.219 fused_ordering(64) 00:14:59.219 fused_ordering(65) 00:14:59.219 fused_ordering(66) 00:14:59.219 fused_ordering(67) 00:14:59.219 fused_ordering(68) 00:14:59.219 fused_ordering(69) 00:14:59.219 fused_ordering(70) 00:14:59.219 fused_ordering(71) 00:14:59.219 fused_ordering(72) 00:14:59.219 fused_ordering(73) 00:14:59.219 fused_ordering(74) 00:14:59.219 fused_ordering(75) 00:14:59.219 fused_ordering(76) 00:14:59.219 fused_ordering(77) 00:14:59.219 fused_ordering(78) 00:14:59.219 fused_ordering(79) 00:14:59.219 fused_ordering(80) 00:14:59.219 fused_ordering(81) 00:14:59.219 fused_ordering(82) 00:14:59.219 fused_ordering(83) 00:14:59.219 fused_ordering(84) 00:14:59.219 fused_ordering(85) 00:14:59.219 fused_ordering(86) 00:14:59.219 fused_ordering(87) 00:14:59.219 fused_ordering(88) 00:14:59.219 fused_ordering(89) 00:14:59.219 fused_ordering(90) 00:14:59.219 fused_ordering(91) 00:14:59.219 fused_ordering(92) 00:14:59.219 fused_ordering(93) 00:14:59.219 fused_ordering(94) 00:14:59.219 fused_ordering(95) 00:14:59.219 fused_ordering(96) 00:14:59.219 fused_ordering(97) 00:14:59.219 fused_ordering(98) 
00:14:59.219 fused_ordering(99) [fused_ordering(100) through fused_ordering(957) elided: one line per index, identical in form, with timestamps advancing from 00:14:59.219 to 00:15:01.203] 00:15:01.203 fused_ordering(958) 
00:15:01.203 fused_ordering(959) 00:15:01.203 fused_ordering(960) 00:15:01.203 fused_ordering(961) 00:15:01.203 fused_ordering(962) 00:15:01.203 fused_ordering(963) 00:15:01.203 fused_ordering(964) 00:15:01.203 fused_ordering(965) 00:15:01.203 fused_ordering(966) 00:15:01.203 fused_ordering(967) 00:15:01.203 fused_ordering(968) 00:15:01.203 fused_ordering(969) 00:15:01.203 fused_ordering(970) 00:15:01.203 fused_ordering(971) 00:15:01.203 fused_ordering(972) 00:15:01.203 fused_ordering(973) 00:15:01.203 fused_ordering(974) 00:15:01.203 fused_ordering(975) 00:15:01.203 fused_ordering(976) 00:15:01.203 fused_ordering(977) 00:15:01.203 fused_ordering(978) 00:15:01.203 fused_ordering(979) 00:15:01.203 fused_ordering(980) 00:15:01.203 fused_ordering(981) 00:15:01.203 fused_ordering(982) 00:15:01.203 fused_ordering(983) 00:15:01.203 fused_ordering(984) 00:15:01.203 fused_ordering(985) 00:15:01.203 fused_ordering(986) 00:15:01.203 fused_ordering(987) 00:15:01.203 fused_ordering(988) 00:15:01.203 fused_ordering(989) 00:15:01.203 fused_ordering(990) 00:15:01.203 fused_ordering(991) 00:15:01.203 fused_ordering(992) 00:15:01.203 fused_ordering(993) 00:15:01.203 fused_ordering(994) 00:15:01.203 fused_ordering(995) 00:15:01.203 fused_ordering(996) 00:15:01.203 fused_ordering(997) 00:15:01.203 fused_ordering(998) 00:15:01.203 fused_ordering(999) 00:15:01.203 fused_ordering(1000) 00:15:01.203 fused_ordering(1001) 00:15:01.203 fused_ordering(1002) 00:15:01.203 fused_ordering(1003) 00:15:01.203 fused_ordering(1004) 00:15:01.203 fused_ordering(1005) 00:15:01.203 fused_ordering(1006) 00:15:01.203 fused_ordering(1007) 00:15:01.203 fused_ordering(1008) 00:15:01.203 fused_ordering(1009) 00:15:01.203 fused_ordering(1010) 00:15:01.203 fused_ordering(1011) 00:15:01.203 fused_ordering(1012) 00:15:01.203 fused_ordering(1013) 00:15:01.203 fused_ordering(1014) 00:15:01.203 fused_ordering(1015) 00:15:01.203 fused_ordering(1016) 00:15:01.203 fused_ordering(1017) 00:15:01.203 fused_ordering(1018) 00:15:01.203 fused_ordering(1019) 00:15:01.203 fused_ordering(1020) 00:15:01.203 fused_ordering(1021) 00:15:01.203 fused_ordering(1022) 00:15:01.203 fused_ordering(1023) 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:01.203 rmmod nvme_tcp 00:15:01.203 rmmod nvme_fabrics 00:15:01.203 rmmod nvme_keyring 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:01.203 16:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1217915 ']' 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1217915 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1217915 ']' 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1217915 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.203 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217915 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217915' 00:15:01.464 killing process with pid 1217915 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1217915 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1217915 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.464 16:09:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:04.011 00:15:04.011 real 0m13.688s 00:15:04.011 user 0m7.280s 00:15:04.011 sys 0m7.449s 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.011 ************************************ 00:15:04.011 END TEST nvmf_fused_ordering 00:15:04.011 
************************************ 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.011 ************************************ 00:15:04.011 START TEST nvmf_ns_masking 00:15:04.011 ************************************ 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.011 * Looking for test storage... 00:15:04.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:04.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.011 --rc genhtml_branch_coverage=1 00:15:04.011 --rc genhtml_function_coverage=1 00:15:04.011 --rc genhtml_legend=1 00:15:04.011 --rc geninfo_all_blocks=1 00:15:04.011 --rc geninfo_unexecuted_blocks=1 00:15:04.011 00:15:04.011 ' 00:15:04.011 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:04.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.011 --rc genhtml_branch_coverage=1 00:15:04.011 --rc genhtml_function_coverage=1 00:15:04.011 --rc genhtml_legend=1 00:15:04.011 --rc geninfo_all_blocks=1 00:15:04.011 --rc geninfo_unexecuted_blocks=1 00:15:04.011 00:15:04.011 ' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.012 --rc genhtml_branch_coverage=1 00:15:04.012 --rc genhtml_function_coverage=1 00:15:04.012 --rc genhtml_legend=1 00:15:04.012 --rc geninfo_all_blocks=1 00:15:04.012 --rc geninfo_unexecuted_blocks=1 00:15:04.012 00:15:04.012 ' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.012 --rc genhtml_branch_coverage=1 00:15:04.012 --rc genhtml_function_coverage=1 00:15:04.012 --rc genhtml_legend=1 00:15:04.012 --rc geninfo_all_blocks=1 00:15:04.012 --rc geninfo_unexecuted_blocks=1 00:15:04.012 00:15:04.012 ' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:04.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4f8fc5fa-1c32-4def-9054-8991f3cfbc54 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8b74f3d7-bd3d-4f92-9b3b-c49de0b68537 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4de804a1-3ea3-4d6d-b810-ee0d4c3b7ec1 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:04.012 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:04.013 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:12.298 16:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:12.298 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:12.299 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:12.299 16:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:12.299 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:12.299 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
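The trace above shows how the harness maps each matched PCI function to its kernel network interface: it globs the function's net/ directory in sysfs, strips each entry down to the interface name, and appends it to the list of usable devices. A minimal sketch of that idiom, reusing the variable names visible in the trace (illustrative, not the verbatim nvmf/common.sh source):

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev exposed by this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done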
00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:12.299 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.299 16:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.299 16:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:12.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:15:12.299 00:15:12.299 --- 10.0.0.2 ping statistics --- 00:15:12.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.299 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:15:12.299 00:15:12.299 --- 10.0.0.1 ping statistics --- 00:15:12.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.299 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1222773 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1222773 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1222773 ']' 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.299 16:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:12.299 [2024-11-20 16:09:47.343583] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:15:12.300 [2024-11-20 16:09:47.343654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.300 [2024-11-20 16:09:47.443811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.300 [2024-11-20 16:09:47.494446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.300 [2024-11-20 16:09:47.494495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.300 [2024-11-20 16:09:47.494504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.300 [2024-11-20 16:09:47.494511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.300 [2024-11-20 16:09:47.494518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
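Annotation: the nvmf_tcp_init trace above is the heart of the phy setup: the target-side port cvl_0_0 is moved into its own network namespace so the two physical E810 ports on one machine can exchange real NVMe/TCP traffic, and nvmf_tgt is then started inside that namespace. A condensed replay of the commands recorded above (a sketch; the harness wraps the iptables call in its ipts helper, which adds the SPDK_NVMF comment seen in the trace):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # namespace that owns the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator interface
    ping -c 1 10.0.0.2                                  # root ns -> target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF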
00:15:12.300 [2024-11-20 16:09:47.495276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.300 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.300 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:12.300 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.300 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.300 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:12.300 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.300 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:12.562 [2024-11-20 16:09:48.365846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.562 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:12.562 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:12.562 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:12.824 Malloc1 00:15:12.824 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:13.084 Malloc2 00:15:13.084 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:13.084 16:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:13.345 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.606 [2024-11-20 16:09:49.330421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.606 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:13.606 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4de804a1-3ea3-4d6d-b810-ee0d4c3b7ec1 -a 10.0.0.2 -s 4420 -i 4 00:15:13.866 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.866 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:13.866 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.866 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:13.866 
16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:15.781 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.782 [ 0]:0x1 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c6e9a2bf70f4ec8aa260009b1e5f997 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c6e9a2bf70f4ec8aa260009b1e5f997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.782 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:16.043 [ 0]:0x1 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c6e9a2bf70f4ec8aa260009b1e5f997 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c6e9a2bf70f4ec8aa260009b1e5f997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:16.043 16:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:16.043 [ 1]:0x2 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:16.043 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:16.303 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f1e3e1dffd648d78752909bec73c40b 00:15:16.303 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f1e3e1dffd648d78752909bec73c40b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:16.303 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:16.303 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.303 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.563 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:16.563 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:16.563 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4de804a1-3ea3-4d6d-b810-ee0d4c3b7ec1 -a 10.0.0.2 -s 4420 -i 4 00:15:16.824 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:16.824 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:16.824 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.824 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:16.824 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:16.824 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:18.739 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:19.000 [ 0]:0x2 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=0f1e3e1dffd648d78752909bec73c40b 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f1e3e1dffd648d78752909bec73c40b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.000 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:19.260 [ 0]:0x1 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c6e9a2bf70f4ec8aa260009b1e5f997 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c6e9a2bf70f4ec8aa260009b1e5f997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:19.260 [ 1]:0x2 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f1e3e1dffd648d78752909bec73c40b 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f1e3e1dffd648d78752909bec73c40b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.260 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.521 16:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:19.521 [ 0]:0x2 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f1e3e1dffd648d78752909bec73c40b 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f1e3e1dffd648d78752909bec73c40b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:19.521 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.782 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:19.782 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:19.782 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4de804a1-3ea3-4d6d-b810-ee0d4c3b7ec1 -a 10.0.0.2 -s 4420 -i 4 00:15:20.043 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:20.043 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:20.043 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.043 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:20.043 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:20.043 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:22.589 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:22.589 [ 0]:0x1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c6e9a2bf70f4ec8aa260009b1e5f997 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c6e9a2bf70f4ec8aa260009b1e5f997 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:22.589 [ 1]:0x2 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f1e3e1dffd648d78752909bec73c40b 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f1e3e1dffd648d78752909bec73c40b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:22.589 [ 0]:0x2 00:15:22.589 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:22.590 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:22.850 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f1e3e1dffd648d78752909bec73c40b 00:15:22.850 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f1e3e1dffd648d78752909bec73c40b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.850 16:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:22.850 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:22.850 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:22.850 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.850 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:22.851 [2024-11-20 16:09:58.711960] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:22.851 request: 00:15:22.851 { 00:15:22.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.851 "nsid": 2, 00:15:22.851 "host": "nqn.2016-06.io.spdk:host1", 00:15:22.851 "method": "nvmf_ns_remove_host", 00:15:22.851 "req_id": 1 00:15:22.851 } 00:15:22.851 Got JSON-RPC error response 00:15:22.851 response: 00:15:22.851 { 00:15:22.851 "code": -32602, 00:15:22.851 "message": "Invalid parameters" 00:15:22.851 } 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:22.851 16:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.851 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.111 [ 0]:0x2 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f1e3e1dffd648d78752909bec73c40b 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f1e3e1dffd648d78752909bec73c40b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1225282 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1225282 /var/tmp/host.sock 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1225282 ']' 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:23.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.111 16:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:23.111 [2024-11-20 16:09:58.975329] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:15:23.111 [2024-11-20 16:09:58.975380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225282 ] 00:15:23.372 [2024-11-20 16:09:59.063880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.372 [2024-11-20 16:09:59.099647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.943 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.944 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:23.944 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.204 16:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:24.204 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4f8fc5fa-1c32-4def-9054-8991f3cfbc54 00:15:24.204 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:24.204 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4F8FC5FA1C324DEF90548991F3CFBC54 -i 00:15:24.465 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8b74f3d7-bd3d-4f92-9b3b-c49de0b68537 00:15:24.465 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:24.465 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8B74F3D7BD3D4F929B3BC49DE0B68537 -i 00:15:24.726 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.986 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:24.987 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:24.987 16:10:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:25.247 nvme0n1 00:15:25.247 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:25.247 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:25.818 nvme1n2 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:25.818 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:26.078 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4f8fc5fa-1c32-4def-9054-8991f3cfbc54 == \4\f\8\f\c\5\f\a\-\1\c\3\2\-\4\d\e\f\-\9\0\5\4\-\8\9\9\1\f\3\c\f\b\c\5\4 ]] 00:15:26.078 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:26.078 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:26.078 16:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:26.338 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
8b74f3d7-bd3d-4f92-9b3b-c49de0b68537 == \8\b\7\4\f\3\d\7\-\b\d\3\d\-\4\f\9\2\-\9\b\3\b\-\c\4\9\d\e\0\b\6\8\5\3\7 ]] 00:15:26.338 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.598 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4f8fc5fa-1c32-4def-9054-8991f3cfbc54 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4F8FC5FA1C324DEF90548991F3CFBC54 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4F8FC5FA1C324DEF90548991F3CFBC54 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:26.599 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4F8FC5FA1C324DEF90548991F3CFBC54 00:15:26.859 [2024-11-20 16:10:02.638387] bdev.c:8418:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:26.860 [2024-11-20 16:10:02.638415] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:26.860 [2024-11-20 16:10:02.638422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.860 request: 00:15:26.860 { 00:15:26.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.860 "namespace": { 00:15:26.860 "bdev_name": 
"invalid", 00:15:26.860 "nsid": 1, 00:15:26.860 "nguid": "4F8FC5FA1C324DEF90548991F3CFBC54", 00:15:26.860 "no_auto_visible": false 00:15:26.860 }, 00:15:26.860 "method": "nvmf_subsystem_add_ns", 00:15:26.860 "req_id": 1 00:15:26.860 } 00:15:26.860 Got JSON-RPC error response 00:15:26.860 response: 00:15:26.860 { 00:15:26.860 "code": -32602, 00:15:26.860 "message": "Invalid parameters" 00:15:26.860 } 00:15:26.860 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:26.860 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.860 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.860 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.860 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4f8fc5fa-1c32-4def-9054-8991f3cfbc54 00:15:26.860 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:26.860 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4F8FC5FA1C324DEF90548991F3CFBC54 -i 00:15:27.120 16:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:29.033 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:29.033 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:29.033 16:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1225282 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1225282 ']' 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1225282 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225282 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225282' 00:15:29.294 killing process with pid 1225282 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1225282 00:15:29.294 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1225282 00:15:29.554 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.554 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:29.554 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:29.554 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:29.554 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:29.815 rmmod nvme_tcp 00:15:29.815 rmmod nvme_fabrics 00:15:29.815 rmmod nvme_keyring 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1222773 ']' 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1222773 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1222773 ']' 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1222773 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222773 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222773' 00:15:29.815 killing process with pid 1222773 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1222773 00:15:29.815 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1222773 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
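Annotation: interleaved with the teardown here, it is worth restating what the suite just verified. Namespace masking was driven purely through the JSON-RPCs traced earlier; condensed, the target-side sequence and the host-side probe look like the sketch below (rpc.py stands for the full scripts/rpc.py path logged above; NQNs and addresses are the ones from this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # grant / revoke per-host visibility of namespace 1; masking keys on the
    # host NQN passed to `nvme connect ... -q nqn.2016-06.io.spdk:host1`:
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # host-side probe used by ns_is_visible(): a masked namespace reads back
    # an all-zero NGUID, as seen in the NOT-wrapped checks above
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid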
00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.075 16:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:31.987 00:15:31.987 real 0m28.344s 00:15:31.987 user 0m32.280s 00:15:31.987 sys 0m8.337s 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:31.987 ************************************ 00:15:31.987 END TEST nvmf_ns_masking 00:15:31.987 ************************************ 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.987 ************************************ 00:15:31.987 START TEST nvmf_nvme_cli 00:15:31.987 ************************************ 00:15:31.987 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:32.248 * Looking for test storage... 
00:15:32.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.248 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:32.248 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:32.248 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.249 --rc genhtml_branch_coverage=1 00:15:32.249 --rc genhtml_function_coverage=1 00:15:32.249 --rc genhtml_legend=1 00:15:32.249 --rc geninfo_all_blocks=1 00:15:32.249 --rc geninfo_unexecuted_blocks=1 00:15:32.249 00:15:32.249 ' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.249 --rc genhtml_branch_coverage=1 00:15:32.249 --rc genhtml_function_coverage=1 00:15:32.249 --rc genhtml_legend=1 00:15:32.249 --rc geninfo_all_blocks=1 00:15:32.249 --rc geninfo_unexecuted_blocks=1 00:15:32.249 00:15:32.249 ' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.249 --rc genhtml_branch_coverage=1 00:15:32.249 --rc genhtml_function_coverage=1 00:15:32.249 --rc genhtml_legend=1 00:15:32.249 --rc geninfo_all_blocks=1 00:15:32.249 --rc geninfo_unexecuted_blocks=1 00:15:32.249 00:15:32.249 ' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.249 --rc genhtml_branch_coverage=1 00:15:32.249 --rc genhtml_function_coverage=1 00:15:32.249 --rc genhtml_legend=1 00:15:32.249 --rc geninfo_all_blocks=1 00:15:32.249 --rc geninfo_unexecuted_blocks=1 00:15:32.249 00:15:32.249 ' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
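[annotation] The long trace above is scripts/common.sh comparing the installed lcov version against 2 (lt 1.15 2) to decide which coverage rc options to export: both versions are split on dots and compared field by field, with missing fields treated as 0. A simplified standalone sketch, fixing the operator to '<' (the real cmp_versions takes the operator as an argument):

    # simplified sketch of the lt/cmp_versions walk traced above
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x; keep the branch/function coverage rc options"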
00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.249 16:10:08 
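[annotation] Among the defaults sourced from nvmf/common.sh above, the host identity comes from nvme gen-hostnqn, and the trace shows NVME_HOSTID matching the NQN's trailing uuid. One plausible way to split them (the exact parameter expansion in common.sh may differ; the array name is taken from the trace):

    # sketch: host NQN/ID setup as seen in the nvmf/common.sh trace above
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # assumption: the ID is the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")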
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:32.249 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:32.250 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:40.393 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:40.393 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.393 
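[annotation] gather_supported_nvmf_pci_devs, traced here, builds vendor:device ID lists per NIC family (e810, x722, mlx) and matches them against the PCI bus, finding the two E810 ports (0x8086:0x159b) and then the kernel netdevs under each device's net/ directory. The same match can be expressed directly against sysfs (a sketch, not the helper's actual cache-based implementation):

    # sketch: find E810 (8086:159b) ports and their kernel netdevs via sysfs
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} (0x8086 - 0x159b)"
        ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0 / cvl_0_1, as reported below
    done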
16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:40.393 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:40.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:40.393 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:40.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:15:40.394 00:15:40.394 --- 10.0.0.2 ping statistics --- 00:15:40.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.394 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:40.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:15:40.394 00:15:40.394 --- 10.0.0.1 ping statistics --- 00:15:40.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.394 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1230696 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1230696 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1230696 ']' 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.394 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.394 [2024-11-20 16:10:15.689956] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
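[annotation] nvmf_tcp_init, traced above, isolates the target port in its own network namespace so initiator and target can talk over the two physical E810 links on one machine; the two pings confirm 10.0.0.1 <-> 10.0.0.2 reachability in both directions before any NVMe traffic. Condensed from the trace, with the same names and addresses:

    # condensed netns plumbing from the nvmf_tcp_init trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1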
00:15:40.394 [2024-11-20 16:10:15.690023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.394 [2024-11-20 16:10:15.790778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.394 [2024-11-20 16:10:15.845786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.394 [2024-11-20 16:10:15.845840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.394 [2024-11-20 16:10:15.845848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.394 [2024-11-20 16:10:15.845856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.394 [2024-11-20 16:10:15.845862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.394 [2024-11-20 16:10:15.847940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.394 [2024-11-20 16:10:15.848099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.394 [2024-11-20 16:10:15.848230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.394 [2024-11-20 16:10:15.848260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.655 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.656 [2024-11-20 16:10:16.556698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.656 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.656 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.656 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.656 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 Malloc0 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
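[annotation] rpc_cmd in these traces forwards to scripts/rpc.py against the target's default /var/tmp/spdk.sock. The nvme_cli target bring-up around this point, written out as plain invocations (sizes are the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE set earlier; all flags as traced):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420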
00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 Malloc1 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 [2024-11-20 16:10:16.668668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:40.917 00:15:40.917 Discovery Log Number of Records 2, Generation counter 2 00:15:40.917 =====Discovery Log Entry 0====== 00:15:40.917 trtype: tcp 00:15:40.917 adrfam: ipv4 00:15:40.917 subtype: current discovery subsystem 00:15:40.917 treq: not required 00:15:40.917 portid: 0 00:15:40.917 trsvcid: 4420 00:15:40.917 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:40.917 traddr: 10.0.0.2 00:15:40.917 eflags: explicit discovery connections, duplicate discovery information 00:15:40.917 sectype: none 00:15:40.917 =====Discovery Log Entry 1====== 00:15:40.917 trtype: tcp 00:15:40.917 adrfam: ipv4 00:15:40.917 subtype: nvme subsystem 00:15:40.917 treq: not required 00:15:40.917 portid: 0 00:15:40.917 trsvcid: 4420 00:15:40.917 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:40.917 traddr: 10.0.0.2 00:15:40.917 eflags: none 00:15:40.917 sectype: none 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:40.917 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.839 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:42.839 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:42.839 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.839 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:42.839 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:42.839 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:44.753 16:10:20 
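[annotation] With the target listening, the host side is ordinary nvme-cli: discover returns the discovery subsystem plus cnode1, connect attaches the controller, and waitforserial polls lsblk until both namespaces show up under the controller's serial. Equivalent manual steps (host identity variables from the gen-hostnqn step earlier; the retry loop is a simplification of the bounded waitforserial loop in the trace):

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    until [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) -eq 2 ]]; do
        sleep 2    # the helper gives up after a fixed number of attempts
    done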
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:44.753 /dev/nvme0n2 ]] 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:44.753 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:45.015 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.276 16:10:20 
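[annotation] Teardown mirrors setup: a single nvme disconnect drops the controller ("disconnected 1 controller(s)" above), and waitforserial_disconnect, traced next, polls lsblk until the serial disappears. Roughly:

    # sketch of the disconnect check traced below
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1    # assumption: the helper also bounds its retries
    done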
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.276 16:10:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.276 rmmod nvme_tcp 00:15:45.276 rmmod nvme_fabrics 00:15:45.276 rmmod nvme_keyring 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1230696 ']' 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1230696 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1230696 ']' 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1230696 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1230696 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1230696' 00:15:45.276 killing process with pid 1230696 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1230696 00:15:45.276 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1230696 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.537 16:10:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.451 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:47.451 00:15:47.451 real 0m15.449s 00:15:47.451 user 0m23.879s 00:15:47.451 sys 0m6.412s 00:15:47.451 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.451 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:47.451 ************************************ 00:15:47.451 END TEST nvmf_nvme_cli 00:15:47.451 ************************************ 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.712 ************************************ 00:15:47.712 START TEST nvmf_vfio_user 00:15:47.712 ************************************ 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:47.712 * Looking for test storage... 00:15:47.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.712 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:47.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.974 --rc genhtml_branch_coverage=1 00:15:47.974 --rc genhtml_function_coverage=1 00:15:47.974 --rc genhtml_legend=1 00:15:47.974 --rc geninfo_all_blocks=1 00:15:47.974 --rc geninfo_unexecuted_blocks=1 00:15:47.974 00:15:47.974 ' 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:47.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.974 --rc genhtml_branch_coverage=1 00:15:47.974 --rc genhtml_function_coverage=1 00:15:47.974 --rc genhtml_legend=1 00:15:47.974 --rc geninfo_all_blocks=1 00:15:47.974 --rc geninfo_unexecuted_blocks=1 00:15:47.974 00:15:47.974 ' 00:15:47.974 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:47.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.975 --rc genhtml_branch_coverage=1 00:15:47.975 --rc genhtml_function_coverage=1 00:15:47.975 --rc genhtml_legend=1 00:15:47.975 --rc geninfo_all_blocks=1 00:15:47.975 --rc geninfo_unexecuted_blocks=1 00:15:47.975 00:15:47.975 ' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:47.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.975 --rc genhtml_branch_coverage=1 00:15:47.975 --rc genhtml_function_coverage=1 00:15:47.975 --rc genhtml_legend=1 00:15:47.975 --rc geninfo_all_blocks=1 00:15:47.975 --rc geninfo_unexecuted_blocks=1 00:15:47.975 00:15:47.975 ' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1232490 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1232490' 00:15:47.975 Process pid: 1232490 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1232490 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1232490 ']' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.975 16:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:47.975 [2024-11-20 16:10:23.741617] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:15:47.975 [2024-11-20 16:10:23.741668] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.975 [2024-11-20 16:10:23.823857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.975 [2024-11-20 16:10:23.854620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.975 [2024-11-20 16:10:23.854652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:47.976 [2024-11-20 16:10:23.854657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.976 [2024-11-20 16:10:23.854662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.976 [2024-11-20 16:10:23.854666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.976 [2024-11-20 16:10:23.855929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.976 [2024-11-20 16:10:23.856076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.976 [2024-11-20 16:10:23.856228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.976 [2024-11-20 16:10:23.856401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.918 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.918 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:48.918 16:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:49.862 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:49.862 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:49.862 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:49.862 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.862 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:49.862 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:50.122 Malloc1 00:15:50.122 16:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:50.383 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:50.383 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:50.644 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:50.644 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:50.644 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:50.905 Malloc2 00:15:50.905 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
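
The target-side setup traced above (and completed for the second device just below) reduces to a short sequence of rpc.py calls against a running nvmf_tgt. A minimal sketch, assuming an SPDK checkout at the same workspace path this job uses, and substituting a plain sleep for the harness's waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the setup_nvmf_vfio_user flow as traced in this log. The SPDK
# path is an assumption mirroring the job workspace; point it at your checkout.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"

# Launch the target on cores 0-3 with all tracepoint groups enabled (-e 0xFFFF),
# shared-memory id 0 (-i 0), exactly as in the trace.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock

# One VFIOUSER transport, then per device: socket directory, a 64 MB malloc
# bdev with 512-byte blocks, a subsystem, its namespace, and a vfio-user
# listener rooted at the socket directory.
"$rpc" nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
    "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER \
        -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done

Every RPC above appears verbatim in the trace; only the sleep is a simplification.
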
00:15:51.166 16:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:51.166 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:51.428 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:51.428 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:51.428 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:51.428 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:51.428 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:51.428 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:51.428 [2024-11-20 16:10:27.252134] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:15:51.428 [2024-11-20 16:10:27.252187] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233182 ] 00:15:51.428 [2024-11-20 16:10:27.290464] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:51.428 [2024-11-20 16:10:27.295716] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:51.428 [2024-11-20 16:10:27.295733] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb384e1e000 00:15:51.428 [2024-11-20 16:10:27.296715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.297715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.298720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.299731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.300735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.301738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.302735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.303745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.428 [2024-11-20 16:10:27.304758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:51.428 [2024-11-20 16:10:27.304765] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb384e13000 00:15:51.428 [2024-11-20 16:10:27.305677] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:51.428 [2024-11-20 16:10:27.315116] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:51.428 [2024-11-20 16:10:27.315146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:51.428 [2024-11-20 16:10:27.320835] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:51.428 [2024-11-20 16:10:27.320871] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:51.429 [2024-11-20 16:10:27.320930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:51.429 [2024-11-20 16:10:27.320944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:51.429 [2024-11-20 16:10:27.320948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:51.429 [2024-11-20 16:10:27.321839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:51.429 [2024-11-20 16:10:27.321846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:51.429 [2024-11-20 16:10:27.321851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:51.429 [2024-11-20 16:10:27.322845] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:51.429 [2024-11-20 16:10:27.322851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:51.429 [2024-11-20 16:10:27.322856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:51.429 [2024-11-20 16:10:27.323847] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:51.429 [2024-11-20 16:10:27.323853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:51.429 [2024-11-20 16:10:27.324858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:15:51.429 [2024-11-20 16:10:27.324864] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:51.429 [2024-11-20 16:10:27.324867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:51.429 [2024-11-20 16:10:27.324872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:51.429 [2024-11-20 16:10:27.324978] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:51.429 [2024-11-20 16:10:27.324981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:51.429 [2024-11-20 16:10:27.324985] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:51.429 [2024-11-20 16:10:27.325868] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:51.429 [2024-11-20 16:10:27.326869] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:51.429 [2024-11-20 16:10:27.327874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:51.429 [2024-11-20 16:10:27.328871] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.429 [2024-11-20 16:10:27.328930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:51.429 [2024-11-20 16:10:27.329880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:51.429 [2024-11-20 16:10:27.329885] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:51.429 [2024-11-20 16:10:27.329889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.329904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:51.429 [2024-11-20 16:10:27.329909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.329921] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:51.429 [2024-11-20 16:10:27.329924] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.429 [2024-11-20 16:10:27.329927] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.429 [2024-11-20 16:10:27.329938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:51.429 [2024-11-20 16:10:27.329975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:51.429 [2024-11-20 16:10:27.329984] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:51.429 [2024-11-20 16:10:27.329988] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:51.429 [2024-11-20 16:10:27.329991] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:51.429 [2024-11-20 16:10:27.329994] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:51.429 [2024-11-20 16:10:27.329999] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:51.429 [2024-11-20 16:10:27.330002] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:51.429 [2024-11-20 16:10:27.330006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:51.429 [2024-11-20 16:10:27.330031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:51.429 [2024-11-20 16:10:27.330039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.429 [2024-11-20 16:10:27.330045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.429 [2024-11-20 16:10:27.330051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.429 [2024-11-20 16:10:27.330057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.429 [2024-11-20 16:10:27.330062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:51.429 [2024-11-20 16:10:27.330081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:51.429 [2024-11-20 16:10:27.330087] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:51.429 
[2024-11-20 16:10:27.330091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:51.429 [2024-11-20 16:10:27.330117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:51.429 [2024-11-20 16:10:27.330162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330174] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:51.429 [2024-11-20 16:10:27.330177] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:51.429 [2024-11-20 16:10:27.330179] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.429 [2024-11-20 16:10:27.330183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:51.429 [2024-11-20 16:10:27.330193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:51.429 [2024-11-20 16:10:27.330200] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:51.429 [2024-11-20 16:10:27.330208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330218] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:51.429 [2024-11-20 16:10:27.330221] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.429 [2024-11-20 16:10:27.330224] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.429 [2024-11-20 16:10:27.330228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:51.429 [2024-11-20 16:10:27.330244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:51.429 [2024-11-20 16:10:27.330254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:51.429 [2024-11-20 16:10:27.330265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:51.429 [2024-11-20 16:10:27.330268] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.429 [2024-11-20 16:10:27.330270] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.429 [2024-11-20 16:10:27.330274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:51.430 [2024-11-20 16:10:27.330293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:51.430 [2024-11-20 16:10:27.330299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:51.430 [2024-11-20 16:10:27.330304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:51.430 [2024-11-20 16:10:27.330307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:51.430 [2024-11-20 16:10:27.330311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:51.430 [2024-11-20 16:10:27.330315] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:51.430 [2024-11-20 16:10:27.330318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:51.430 [2024-11-20 16:10:27.330321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:51.430 [2024-11-20 16:10:27.330336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330406] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:51.430 [2024-11-20 16:10:27.330411] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:51.430 [2024-11-20 16:10:27.330413] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:51.430 [2024-11-20 16:10:27.330416] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:51.430 [2024-11-20 16:10:27.330418] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:51.430 [2024-11-20 16:10:27.330423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:51.430 [2024-11-20 16:10:27.330428] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:51.430 [2024-11-20 16:10:27.330431] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:51.430 [2024-11-20 16:10:27.330433] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.430 [2024-11-20 16:10:27.330437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330442] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:51.430 [2024-11-20 16:10:27.330445] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.430 [2024-11-20 16:10:27.330448] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.430 [2024-11-20 16:10:27.330452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:51.430 [2024-11-20 16:10:27.330460] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:51.430 [2024-11-20 16:10:27.330463] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.430 [2024-11-20 16:10:27.330467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:51.430 [2024-11-20 16:10:27.330472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:51.430 [2024-11-20 16:10:27.330493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:51.430 ===================================================== 00:15:51.430 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:51.430 ===================================================== 00:15:51.430 Controller Capabilities/Features 00:15:51.430 ================================ 00:15:51.430 Vendor ID: 4e58 00:15:51.430 Subsystem Vendor ID: 4e58 00:15:51.430 Serial Number: SPDK1 00:15:51.430 Model Number: SPDK bdev Controller 00:15:51.430 Firmware Version: 25.01 00:15:51.430 Recommended Arb Burst: 6 00:15:51.430 IEEE OUI Identifier: 8d 6b 50 00:15:51.430 Multi-path I/O 00:15:51.430 May have multiple subsystem ports: Yes 00:15:51.430 May have multiple controllers: Yes 00:15:51.430 Associated with SR-IOV VF: No 00:15:51.430 Max Data Transfer Size: 131072 00:15:51.430 Max Number of Namespaces: 32 00:15:51.430 Max Number of I/O Queues: 127 00:15:51.430 NVMe Specification Version (VS): 1.3 00:15:51.430 NVMe Specification Version (Identify): 1.3 00:15:51.430 Maximum Queue Entries: 256 00:15:51.430 Contiguous Queues Required: Yes 00:15:51.430 Arbitration Mechanisms Supported 00:15:51.430 Weighted Round Robin: Not Supported 00:15:51.430 Vendor Specific: Not Supported 00:15:51.430 Reset Timeout: 15000 ms 00:15:51.430 Doorbell Stride: 4 bytes 00:15:51.430 NVM Subsystem Reset: Not Supported 00:15:51.430 Command Sets Supported 00:15:51.430 NVM Command Set: Supported 00:15:51.430 Boot Partition: Not Supported 00:15:51.430 Memory Page Size Minimum: 4096 bytes 00:15:51.430 Memory Page Size Maximum: 4096 bytes 00:15:51.430 Persistent Memory Region: Not Supported 00:15:51.430 Optional Asynchronous Events Supported 00:15:51.430 Namespace Attribute Notices: Supported 00:15:51.430 Firmware Activation Notices: Not Supported 00:15:51.430 ANA Change Notices: Not Supported 00:15:51.430 PLE Aggregate Log Change Notices: Not Supported 00:15:51.430 LBA Status Info Alert Notices: Not Supported 00:15:51.430 EGE Aggregate Log Change Notices: Not Supported 00:15:51.430 Normal NVM Subsystem Shutdown event: Not Supported 00:15:51.430 Zone Descriptor Change Notices: Not Supported 00:15:51.430 Discovery Log Change Notices: Not Supported 00:15:51.430 Controller Attributes 00:15:51.430 128-bit Host Identifier: Supported 00:15:51.430 Non-Operational Permissive Mode: Not Supported 00:15:51.430 NVM Sets: Not Supported 00:15:51.430 Read Recovery Levels: Not Supported 00:15:51.430 Endurance Groups: Not Supported 00:15:51.430 Predictable Latency Mode: Not Supported 00:15:51.430 Traffic Based Keep ALive: Not Supported 00:15:51.430 Namespace Granularity: Not Supported 00:15:51.430 SQ Associations: Not Supported 00:15:51.430 UUID List: Not Supported 00:15:51.430 Multi-Domain Subsystem: Not Supported 00:15:51.430 Fixed Capacity Management: Not Supported 00:15:51.430 Variable Capacity Management: Not Supported 00:15:51.430 Delete Endurance Group: Not Supported 00:15:51.430 Delete NVM Set: Not Supported 00:15:51.430 Extended LBA Formats Supported: Not Supported 00:15:51.430 Flexible Data Placement Supported: Not Supported 00:15:51.430 00:15:51.430 Controller Memory Buffer Support 00:15:51.430 ================================ 00:15:51.430 
Supported: No 00:15:51.430 00:15:51.430 Persistent Memory Region Support 00:15:51.430 ================================ 00:15:51.430 Supported: No 00:15:51.430 00:15:51.430 Admin Command Set Attributes 00:15:51.430 ============================ 00:15:51.430 Security Send/Receive: Not Supported 00:15:51.430 Format NVM: Not Supported 00:15:51.430 Firmware Activate/Download: Not Supported 00:15:51.430 Namespace Management: Not Supported 00:15:51.430 Device Self-Test: Not Supported 00:15:51.430 Directives: Not Supported 00:15:51.430 NVMe-MI: Not Supported 00:15:51.430 Virtualization Management: Not Supported 00:15:51.430 Doorbell Buffer Config: Not Supported 00:15:51.430 Get LBA Status Capability: Not Supported 00:15:51.430 Command & Feature Lockdown Capability: Not Supported 00:15:51.430 Abort Command Limit: 4 00:15:51.430 Async Event Request Limit: 4 00:15:51.430 Number of Firmware Slots: N/A 00:15:51.430 Firmware Slot 1 Read-Only: N/A 00:15:51.430 Firmware Activation Without Reset: N/A 00:15:51.430 Multiple Update Detection Support: N/A 00:15:51.430 Firmware Update Granularity: No Information Provided 00:15:51.430 Per-Namespace SMART Log: No 00:15:51.430 Asymmetric Namespace Access Log Page: Not Supported 00:15:51.430 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:51.430 Command Effects Log Page: Supported 00:15:51.430 Get Log Page Extended Data: Supported 00:15:51.430 Telemetry Log Pages: Not Supported 00:15:51.430 Persistent Event Log Pages: Not Supported 00:15:51.430 Supported Log Pages Log Page: May Support 00:15:51.431 Commands Supported & Effects Log Page: Not Supported 00:15:51.431 Feature Identifiers & Effects Log Page:May Support 00:15:51.431 NVMe-MI Commands & Effects Log Page: May Support 00:15:51.431 Data Area 4 for Telemetry Log: Not Supported 00:15:51.431 Error Log Page Entries Supported: 128 00:15:51.431 Keep Alive: Supported 00:15:51.431 Keep Alive Granularity: 10000 ms 00:15:51.431 00:15:51.431 NVM Command Set Attributes 00:15:51.431 ========================== 00:15:51.431 Submission Queue Entry Size 00:15:51.431 Max: 64 00:15:51.431 Min: 64 00:15:51.431 Completion Queue Entry Size 00:15:51.431 Max: 16 00:15:51.431 Min: 16 00:15:51.431 Number of Namespaces: 32 00:15:51.431 Compare Command: Supported 00:15:51.431 Write Uncorrectable Command: Not Supported 00:15:51.431 Dataset Management Command: Supported 00:15:51.431 Write Zeroes Command: Supported 00:15:51.431 Set Features Save Field: Not Supported 00:15:51.431 Reservations: Not Supported 00:15:51.431 Timestamp: Not Supported 00:15:51.431 Copy: Supported 00:15:51.431 Volatile Write Cache: Present 00:15:51.431 Atomic Write Unit (Normal): 1 00:15:51.431 Atomic Write Unit (PFail): 1 00:15:51.431 Atomic Compare & Write Unit: 1 00:15:51.431 Fused Compare & Write: Supported 00:15:51.431 Scatter-Gather List 00:15:51.431 SGL Command Set: Supported (Dword aligned) 00:15:51.431 SGL Keyed: Not Supported 00:15:51.431 SGL Bit Bucket Descriptor: Not Supported 00:15:51.431 SGL Metadata Pointer: Not Supported 00:15:51.431 Oversized SGL: Not Supported 00:15:51.431 SGL Metadata Address: Not Supported 00:15:51.431 SGL Offset: Not Supported 00:15:51.431 Transport SGL Data Block: Not Supported 00:15:51.431 Replay Protected Memory Block: Not Supported 00:15:51.431 00:15:51.431 Firmware Slot Information 00:15:51.431 ========================= 00:15:51.431 Active slot: 1 00:15:51.431 Slot 1 Firmware Revision: 25.01 00:15:51.431 00:15:51.431 00:15:51.431 Commands Supported and Effects 00:15:51.431 ============================== 00:15:51.431 Admin 
Commands 00:15:51.431 -------------- 00:15:51.431 Get Log Page (02h): Supported 00:15:51.431 Identify (06h): Supported 00:15:51.431 Abort (08h): Supported 00:15:51.431 Set Features (09h): Supported 00:15:51.431 Get Features (0Ah): Supported 00:15:51.431 Asynchronous Event Request (0Ch): Supported 00:15:51.431 Keep Alive (18h): Supported 00:15:51.431 I/O Commands 00:15:51.431 ------------ 00:15:51.431 Flush (00h): Supported LBA-Change 00:15:51.431 Write (01h): Supported LBA-Change 00:15:51.431 Read (02h): Supported 00:15:51.431 Compare (05h): Supported 00:15:51.431 Write Zeroes (08h): Supported LBA-Change 00:15:51.431 Dataset Management (09h): Supported LBA-Change 00:15:51.431 Copy (19h): Supported LBA-Change 00:15:51.431 00:15:51.431 Error Log 00:15:51.431 ========= 00:15:51.431 00:15:51.431 Arbitration 00:15:51.431 =========== 00:15:51.431 Arbitration Burst: 1 00:15:51.431 00:15:51.431 Power Management 00:15:51.431 ================ 00:15:51.431 Number of Power States: 1 00:15:51.431 Current Power State: Power State #0 00:15:51.431 Power State #0: 00:15:51.431 Max Power: 0.00 W 00:15:51.431 Non-Operational State: Operational 00:15:51.431 Entry Latency: Not Reported 00:15:51.431 Exit Latency: Not Reported 00:15:51.431 Relative Read Throughput: 0 00:15:51.431 Relative Read Latency: 0 00:15:51.431 Relative Write Throughput: 0 00:15:51.431 Relative Write Latency: 0 00:15:51.431 Idle Power: Not Reported 00:15:51.431 Active Power: Not Reported 00:15:51.431 Non-Operational Permissive Mode: Not Supported 00:15:51.431 00:15:51.431 Health Information 00:15:51.431 ================== 00:15:51.431 Critical Warnings: 00:15:51.431 Available Spare Space: OK 00:15:51.431 Temperature: OK 00:15:51.431 Device Reliability: OK 00:15:51.431 Read Only: No 00:15:51.431 Volatile Memory Backup: OK 00:15:51.431 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:51.431 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:51.431 Available Spare: 0% 00:15:51.431 Available Sp[2024-11-20 16:10:27.330565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:51.431 [2024-11-20 16:10:27.330572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:51.431 [2024-11-20 16:10:27.330591] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:51.431 [2024-11-20 16:10:27.330599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.431 [2024-11-20 16:10:27.330603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.431 [2024-11-20 16:10:27.330608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.431 [2024-11-20 16:10:27.330612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.431 [2024-11-20 16:10:27.334163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:51.431 [2024-11-20 16:10:27.334173] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:51.431 [2024-11-20 16:10:27.334911] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.431 [2024-11-20 16:10:27.334950] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:51.431 [2024-11-20 16:10:27.334954] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:51.431 [2024-11-20 16:10:27.335922] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:51.431 [2024-11-20 16:10:27.335931] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:51.431 [2024-11-20 16:10:27.335982] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:51.431 [2024-11-20 16:10:27.336947] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:51.691 are Threshold: 0% 00:15:51.691 Life Percentage Used: 0% 00:15:51.691 Data Units Read: 0 00:15:51.691 Data Units Written: 0 00:15:51.691 Host Read Commands: 0 00:15:51.691 Host Write Commands: 0 00:15:51.691 Controller Busy Time: 0 minutes 00:15:51.691 Power Cycles: 0 00:15:51.691 Power On Hours: 0 hours 00:15:51.691 Unsafe Shutdowns: 0 00:15:51.691 Unrecoverable Media Errors: 0 00:15:51.691 Lifetime Error Log Entries: 0 00:15:51.691 Warning Temperature Time: 0 minutes 00:15:51.691 Critical Temperature Time: 0 minutes 00:15:51.691 00:15:51.691 Number of Queues 00:15:51.691 ================ 00:15:51.691 Number of I/O Submission Queues: 127 00:15:51.691 Number of I/O Completion Queues: 127 00:15:51.691 00:15:51.691 Active Namespaces 00:15:51.691 ================= 00:15:51.691 Namespace ID:1 00:15:51.691 Error Recovery Timeout: Unlimited 00:15:51.691 Command Set Identifier: NVM (00h) 00:15:51.692 Deallocate: Supported 00:15:51.692 Deallocated/Unwritten Error: Not Supported 00:15:51.692 Deallocated Read Value: Unknown 00:15:51.692 Deallocate in Write Zeroes: Not Supported 00:15:51.692 Deallocated Guard Field: 0xFFFF 00:15:51.692 Flush: Supported 00:15:51.692 Reservation: Supported 00:15:51.692 Namespace Sharing Capabilities: Multiple Controllers 00:15:51.692 Size (in LBAs): 131072 (0GiB) 00:15:51.692 Capacity (in LBAs): 131072 (0GiB) 00:15:51.692 Utilization (in LBAs): 131072 (0GiB) 00:15:51.692 NGUID: C80636EF8DE24AF5BE2657C7CA3B6744 00:15:51.692 UUID: c80636ef-8de2-4af5-be26-57c7ca3b6744 00:15:51.692 Thin Provisioning: Not Supported 00:15:51.692 Per-NS Atomic Units: Yes 00:15:51.692 Atomic Boundary Size (Normal): 0 00:15:51.692 Atomic Boundary Size (PFail): 0 00:15:51.692 Atomic Boundary Offset: 0 00:15:51.692 Maximum Single Source Range Length: 65535 00:15:51.692 Maximum Copy Length: 65535 00:15:51.692 Maximum Source Range Count: 1 00:15:51.692 NGUID/EUI64 Never Reused: No 00:15:51.692 Namespace Write Protected: No 00:15:51.692 Number of LBA Formats: 1 00:15:51.692 Current LBA Format: LBA Format #00 00:15:51.692 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:51.692 00:15:51.692 16:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
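
The identify dump above and the perf runs whose output follows are plain invocations of the bundled example binaries against the vfio-user socket; no kernel driver or PCI device is involved. A minimal sketch of the two commands, assuming the same workspace path as in the setup sketch earlier:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# Controller/namespace dump with nvme, nvme_vfio and vfio_pci debug logging,
# producing the report printed above.
"$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci

# 4096-byte reads at queue depth 128 for 5 seconds, worker pinned to core 1
# (-c 0x2); -s 256 and -g are the harness's memory options, kept as traced.
# This is the run whose latency table follows below.
"$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

Swapping -w read for -w write reproduces the second perf run further down.
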
00:15:51.692 [2024-11-20 16:10:27.522829] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.980 Initializing NVMe Controllers 00:15:56.980 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:56.981 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:56.981 Initialization complete. Launching workers. 00:15:56.981 ======================================================== 00:15:56.981 Latency(us) 00:15:56.981 Device Information : IOPS MiB/s Average min max 00:15:56.981 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40007.40 156.28 3199.27 851.67 10769.38 00:15:56.981 ======================================================== 00:15:56.981 Total : 40007.40 156.28 3199.27 851.67 10769.38 00:15:56.981 00:15:56.981 [2024-11-20 16:10:32.539695] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.981 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:56.981 [2024-11-20 16:10:32.727549] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:02.269 Initializing NVMe Controllers 00:16:02.269 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:02.269 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:02.269 Initialization complete. Launching workers. 
00:16:02.269 ======================================================== 00:16:02.269 Latency(us) 00:16:02.269 Device Information : IOPS MiB/s Average min max 00:16:02.269 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16003.24 62.51 7997.84 5339.05 14880.92 00:16:02.270 ======================================================== 00:16:02.270 Total : 16003.24 62.51 7997.84 5339.05 14880.92 00:16:02.270 00:16:02.270 [2024-11-20 16:10:37.760266] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:02.270 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:02.270 [2024-11-20 16:10:37.961077] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:07.559 [2024-11-20 16:10:43.032365] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:07.559 Initializing NVMe Controllers 00:16:07.559 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:07.559 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:07.559 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:07.559 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:07.559 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:07.559 Initialization complete. Launching workers. 00:16:07.559 Starting thread on core 2 00:16:07.559 Starting thread on core 3 00:16:07.559 Starting thread on core 1 00:16:07.559 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:07.559 [2024-11-20 16:10:43.277312] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:10.862 [2024-11-20 16:10:46.340474] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:10.862 Initializing NVMe Controllers 00:16:10.862 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.862 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.862 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:10.862 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:10.862 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:10.862 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:10.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:10.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:10.862 Initialization complete. Launching workers. 
00:16:10.862 Starting thread on core 1 with urgent priority queue 00:16:10.862 Starting thread on core 2 with urgent priority queue 00:16:10.862 Starting thread on core 3 with urgent priority queue 00:16:10.862 Starting thread on core 0 with urgent priority queue 00:16:10.862 SPDK bdev Controller (SPDK1 ) core 0: 12372.67 IO/s 8.08 secs/100000 ios 00:16:10.862 SPDK bdev Controller (SPDK1 ) core 1: 11231.00 IO/s 8.90 secs/100000 ios 00:16:10.862 SPDK bdev Controller (SPDK1 ) core 2: 10326.00 IO/s 9.68 secs/100000 ios 00:16:10.862 SPDK bdev Controller (SPDK1 ) core 3: 8819.33 IO/s 11.34 secs/100000 ios 00:16:10.862 ======================================================== 00:16:10.862 00:16:10.862 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:10.862 [2024-11-20 16:10:46.577573] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:10.862 Initializing NVMe Controllers 00:16:10.862 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.862 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.863 Namespace ID: 1 size: 0GB 00:16:10.863 Initialization complete. 00:16:10.863 INFO: using host memory buffer for IO 00:16:10.863 Hello world! 00:16:10.863 [2024-11-20 16:10:46.614794] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:10.863 16:10:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:11.124 [2024-11-20 16:10:46.850511] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:12.067 Initializing NVMe Controllers 00:16:12.067 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.067 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.067 Initialization complete. Launching workers. 
00:16:12.067 submit (in ns) avg, min, max = 5939.2, 2815.0, 3999450.8 00:16:12.067 complete (in ns) avg, min, max = 17774.7, 1636.7, 3998951.7 00:16:12.067 00:16:12.067 Submit histogram 00:16:12.067 ================ 00:16:12.067 Range in us Cumulative Count 00:16:12.067 2.813 - 2.827: 0.3421% ( 69) 00:16:12.067 2.827 - 2.840: 1.4972% ( 233) 00:16:12.067 2.840 - 2.853: 3.6637% ( 437) 00:16:12.067 2.853 - 2.867: 7.6298% ( 800) 00:16:12.067 2.867 - 2.880: 12.7064% ( 1024) 00:16:12.067 2.880 - 2.893: 18.7150% ( 1212) 00:16:12.067 2.893 - 2.907: 25.1748% ( 1303) 00:16:12.067 2.907 - 2.920: 31.0991% ( 1195) 00:16:12.067 2.920 - 2.933: 37.3060% ( 1252) 00:16:12.067 2.933 - 2.947: 42.2934% ( 1006) 00:16:12.067 2.947 - 2.960: 46.7999% ( 909) 00:16:12.067 2.960 - 2.973: 53.1803% ( 1287) 00:16:12.067 2.973 - 2.987: 60.3639% ( 1449) 00:16:12.067 2.987 - 3.000: 68.3952% ( 1620) 00:16:12.067 3.000 - 3.013: 76.9074% ( 1717) 00:16:12.067 3.013 - 3.027: 84.1208% ( 1455) 00:16:12.067 3.027 - 3.040: 90.2732% ( 1241) 00:16:12.067 3.040 - 3.053: 94.5813% ( 869) 00:16:12.067 3.053 - 3.067: 97.2832% ( 545) 00:16:12.067 3.067 - 3.080: 98.4731% ( 240) 00:16:12.067 3.080 - 3.093: 99.0828% ( 123) 00:16:12.067 3.093 - 3.107: 99.3605% ( 56) 00:16:12.067 3.107 - 3.120: 99.5042% ( 29) 00:16:12.067 3.120 - 3.133: 99.5389% ( 7) 00:16:12.067 3.133 - 3.147: 99.5885% ( 10) 00:16:12.067 3.147 - 3.160: 99.6034% ( 3) 00:16:12.068 3.160 - 3.173: 99.6133% ( 2) 00:16:12.068 3.267 - 3.280: 99.6183% ( 1) 00:16:12.068 3.347 - 3.360: 99.6232% ( 1) 00:16:12.068 3.493 - 3.520: 99.6282% ( 1) 00:16:12.068 3.520 - 3.547: 99.6331% ( 1) 00:16:12.068 3.573 - 3.600: 99.6381% ( 1) 00:16:12.068 3.680 - 3.707: 99.6480% ( 2) 00:16:12.068 3.760 - 3.787: 99.6530% ( 1) 00:16:12.068 3.920 - 3.947: 99.6579% ( 1) 00:16:12.068 3.973 - 4.000: 99.6629% ( 1) 00:16:12.068 4.507 - 4.533: 99.6678% ( 1) 00:16:12.068 4.533 - 4.560: 99.6728% ( 1) 00:16:12.068 4.640 - 4.667: 99.6778% ( 1) 00:16:12.068 4.853 - 4.880: 99.6827% ( 1) 00:16:12.068 4.907 - 4.933: 99.6877% ( 1) 00:16:12.068 4.933 - 4.960: 99.7125% ( 5) 00:16:12.068 4.960 - 4.987: 99.7174% ( 1) 00:16:12.068 4.987 - 5.013: 99.7224% ( 1) 00:16:12.068 5.040 - 5.067: 99.7273% ( 1) 00:16:12.068 5.093 - 5.120: 99.7323% ( 1) 00:16:12.068 5.200 - 5.227: 99.7372% ( 1) 00:16:12.068 5.413 - 5.440: 99.7472% ( 2) 00:16:12.068 5.520 - 5.547: 99.7521% ( 1) 00:16:12.068 5.600 - 5.627: 99.7571% ( 1) 00:16:12.068 5.707 - 5.733: 99.7620% ( 1) 00:16:12.068 5.813 - 5.840: 99.7670% ( 1) 00:16:12.068 6.027 - 6.053: 99.7769% ( 2) 00:16:12.068 6.080 - 6.107: 99.7868% ( 2) 00:16:12.068 6.187 - 6.213: 99.7918% ( 1) 00:16:12.068 6.240 - 6.267: 99.7967% ( 1) 00:16:12.068 6.320 - 6.347: 99.8017% ( 1) 00:16:12.068 6.400 - 6.427: 99.8067% ( 1) 00:16:12.068 6.427 - 6.453: 99.8166% ( 2) 00:16:12.068 6.480 - 6.507: 99.8215% ( 1) 00:16:12.068 6.560 - 6.587: 99.8265% ( 1) 00:16:12.068 6.587 - 6.613: 99.8314% ( 1) 00:16:12.068 6.613 - 6.640: 99.8364% ( 1) 00:16:12.068 6.720 - 6.747: 99.8414% ( 1) 00:16:12.068 6.773 - 6.800: 99.8463% ( 1) 00:16:12.068 6.827 - 6.880: 99.8513% ( 1) 00:16:12.068 6.933 - 6.987: 99.8562% ( 1) 00:16:12.068 7.093 - 7.147: 99.8612% ( 1) 00:16:12.068 7.147 - 7.200: 99.8661% ( 1) 00:16:12.068 7.307 - 7.360: 99.8761% ( 2) 00:16:12.068 7.360 - 7.413: 99.8810% ( 1) 00:16:12.068 7.413 - 7.467: 99.8909% ( 2) 00:16:12.068 7.467 - 7.520: 99.8959% ( 1) 00:16:12.068 7.680 - 7.733: 99.9008% ( 1) 00:16:12.068 7.787 - 7.840: 99.9058% ( 1) 00:16:12.068 7.947 - 8.000: 99.9108% ( 1) 00:16:12.068 8.053 - 8.107: 99.9157% ( 1) 
00:16:12.068 8.267 - 8.320: 99.9207% ( 1) 00:16:12.068 14.293 - 14.400: 99.9256% ( 1) 00:16:12.068 [2024-11-20 16:10:47.872131] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:12.068 3986.773 - 4014.080: 100.0000% ( 15) 00:16:12.068 00:16:12.068 Complete histogram 00:16:12.068 ================== 00:16:12.068 Range in us Cumulative Count 00:16:12.068 1.633 - 1.640: 0.0843% ( 17) 00:16:12.068 1.640 - 1.647: 0.6990% ( 124) 00:16:12.068 1.647 - 1.653: 0.7932% ( 19) 00:16:12.068 1.653 - 1.660: 0.8725% ( 16) 00:16:12.068 1.660 - 1.667: 0.9866% ( 23) 00:16:12.068 1.667 - 1.673: 1.0510% ( 13) 00:16:12.068 1.673 - 1.680: 1.0609% ( 2) 00:16:12.068 1.680 - 1.687: 1.0758% ( 3) 00:16:12.068 1.687 - 1.693: 1.0956% ( 4) 00:16:12.068 1.693 - 1.700: 4.1446% ( 615) 00:16:12.068 1.700 - 1.707: 35.7791% ( 6381) 00:16:12.068 1.707 - 1.720: 59.5855% ( 4802) 00:16:12.068 1.720 - 1.733: 74.8054% ( 3070) 00:16:12.068 1.733 - 1.747: 81.9741% ( 1446) 00:16:12.068 1.747 - 1.760: 83.7787% ( 364) 00:16:12.068 1.760 - 1.773: 88.6867% ( 990) 00:16:12.068 1.773 - 1.787: 94.2839% ( 1129) 00:16:12.068 1.787 - 1.800: 97.3526% ( 619) 00:16:12.068 1.800 - 1.813: 98.8052% ( 293) 00:16:12.068 1.813 - 1.827: 99.3010% ( 100) 00:16:12.068 1.827 - 1.840: 99.3803% ( 16) 00:16:12.068 1.840 - 1.853: 99.4001% ( 4) 00:16:12.068 1.893 - 1.907: 99.4051% ( 1) 00:16:12.068 3.493 - 3.520: 99.4100% ( 1) 00:16:12.068 3.573 - 3.600: 99.4150% ( 1) 00:16:12.068 3.600 - 3.627: 99.4200% ( 1) 00:16:12.068 3.840 - 3.867: 99.4249% ( 1) 00:16:12.068 3.947 - 3.973: 99.4299% ( 1) 00:16:12.068 4.187 - 4.213: 99.4348% ( 1) 00:16:12.068 4.560 - 4.587: 99.4398% ( 1) 00:16:12.068 4.747 - 4.773: 99.4447% ( 1) 00:16:12.068 4.880 - 4.907: 99.4497% ( 1) 00:16:12.068 4.987 - 5.013: 99.4596% ( 2) 00:16:12.068 5.147 - 5.173: 99.4695% ( 2) 00:16:12.068 5.173 - 5.200: 99.4745% ( 1) 00:16:12.068 5.200 - 5.227: 99.4844% ( 2) 00:16:12.068 5.227 - 5.253: 99.4894% ( 1) 00:16:12.068 5.307 - 5.333: 99.4943% ( 1) 00:16:12.068 5.547 - 5.573: 99.4993% ( 1) 00:16:12.068 5.627 - 5.653: 99.5042% ( 1) 00:16:12.068 5.680 - 5.707: 99.5092% ( 1) 00:16:12.068 5.707 - 5.733: 99.5142% ( 1) 00:16:12.068 5.733 - 5.760: 99.5241% ( 2) 00:16:12.068 5.947 - 5.973: 99.5340% ( 2) 00:16:12.068 6.080 - 6.107: 99.5439% ( 2) 00:16:12.068 6.107 - 6.133: 99.5489% ( 1) 00:16:12.068 6.187 - 6.213: 99.5538% ( 1) 00:16:12.068 6.533 - 6.560: 99.5588% ( 1) 00:16:12.068 6.667 - 6.693: 99.5687% ( 2) 00:16:12.068 6.693 - 6.720: 99.5736% ( 1) 00:16:12.068 6.880 - 6.933: 99.5786% ( 1) 00:16:12.068 10.293 - 10.347: 99.5836% ( 1) 00:16:12.068 10.827 - 10.880: 99.5885% ( 1) 00:16:12.068 12.373 - 12.427: 99.5935% ( 1) 00:16:12.068 12.907 - 12.960: 99.5984% ( 1) 00:16:12.068 3986.773 - 4014.080: 100.0000% ( 81) 00:16:12.068 00:16:12.068 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:12.068 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:12.068 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:12.068 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:12.068 16:10:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:12.330 [ 00:16:12.330 { 00:16:12.330 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:12.330 "subtype": "Discovery", 00:16:12.330 "listen_addresses": [], 00:16:12.330 "allow_any_host": true, 00:16:12.330 "hosts": [] 00:16:12.330 }, 00:16:12.330 { 00:16:12.330 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:12.330 "subtype": "NVMe", 00:16:12.330 "listen_addresses": [ 00:16:12.330 { 00:16:12.330 "trtype": "VFIOUSER", 00:16:12.330 "adrfam": "IPv4", 00:16:12.330 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:12.330 "trsvcid": "0" 00:16:12.330 } 00:16:12.330 ], 00:16:12.330 "allow_any_host": true, 00:16:12.330 "hosts": [], 00:16:12.330 "serial_number": "SPDK1", 00:16:12.330 "model_number": "SPDK bdev Controller", 00:16:12.330 "max_namespaces": 32, 00:16:12.330 "min_cntlid": 1, 00:16:12.330 "max_cntlid": 65519, 00:16:12.330 "namespaces": [ 00:16:12.330 { 00:16:12.330 "nsid": 1, 00:16:12.330 "bdev_name": "Malloc1", 00:16:12.330 "name": "Malloc1", 00:16:12.330 "nguid": "C80636EF8DE24AF5BE2657C7CA3B6744", 00:16:12.330 "uuid": "c80636ef-8de2-4af5-be26-57c7ca3b6744" 00:16:12.330 } 00:16:12.330 ] 00:16:12.330 }, 00:16:12.330 { 00:16:12.330 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:12.330 "subtype": "NVMe", 00:16:12.330 "listen_addresses": [ 00:16:12.330 { 00:16:12.330 "trtype": "VFIOUSER", 00:16:12.330 "adrfam": "IPv4", 00:16:12.330 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:12.330 "trsvcid": "0" 00:16:12.330 } 00:16:12.330 ], 00:16:12.330 "allow_any_host": true, 00:16:12.330 "hosts": [], 00:16:12.330 "serial_number": "SPDK2", 00:16:12.330 "model_number": "SPDK bdev Controller", 00:16:12.330 "max_namespaces": 32, 00:16:12.330 "min_cntlid": 1, 00:16:12.330 "max_cntlid": 65519, 00:16:12.330 "namespaces": [ 00:16:12.330 { 00:16:12.330 "nsid": 1, 00:16:12.330 "bdev_name": "Malloc2", 00:16:12.330 "name": "Malloc2", 00:16:12.330 "nguid": "A88BC5EAF95E48CB83F04C61BE85F195", 00:16:12.330 "uuid": "a88bc5ea-f95e-48cb-83f0-4c61be85f195" 00:16:12.330 } 00:16:12.330 ] 00:16:12.330 } 00:16:12.330 ] 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1237208 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:12.330 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:12.330 [2024-11-20 16:10:48.257535] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:12.591 Malloc3 00:16:12.591 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:12.591 [2024-11-20 16:10:48.445898] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:12.591 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:12.591 Asynchronous Event Request test 00:16:12.591 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.591 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.591 Registering asynchronous event callbacks... 00:16:12.591 Starting namespace attribute notice tests for all controllers... 00:16:12.591 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:12.591 aer_cb - Changed Namespace 00:16:12.591 Cleaning up... 00:16:12.853 [ 00:16:12.853 { 00:16:12.853 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:12.853 "subtype": "Discovery", 00:16:12.853 "listen_addresses": [], 00:16:12.853 "allow_any_host": true, 00:16:12.853 "hosts": [] 00:16:12.854 }, 00:16:12.854 { 00:16:12.854 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:12.854 "subtype": "NVMe", 00:16:12.854 "listen_addresses": [ 00:16:12.854 { 00:16:12.854 "trtype": "VFIOUSER", 00:16:12.854 "adrfam": "IPv4", 00:16:12.854 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:12.854 "trsvcid": "0" 00:16:12.854 } 00:16:12.854 ], 00:16:12.854 "allow_any_host": true, 00:16:12.854 "hosts": [], 00:16:12.854 "serial_number": "SPDK1", 00:16:12.854 "model_number": "SPDK bdev Controller", 00:16:12.854 "max_namespaces": 32, 00:16:12.854 "min_cntlid": 1, 00:16:12.854 "max_cntlid": 65519, 00:16:12.854 "namespaces": [ 00:16:12.854 { 00:16:12.854 "nsid": 1, 00:16:12.854 "bdev_name": "Malloc1", 00:16:12.854 "name": "Malloc1", 00:16:12.854 "nguid": "C80636EF8DE24AF5BE2657C7CA3B6744", 00:16:12.854 "uuid": "c80636ef-8de2-4af5-be26-57c7ca3b6744" 00:16:12.854 }, 00:16:12.854 { 00:16:12.854 "nsid": 2, 00:16:12.854 "bdev_name": "Malloc3", 00:16:12.854 "name": "Malloc3", 00:16:12.854 "nguid": "BC2E4ED3121846B3B7DFADEFF9CAC5F4", 00:16:12.854 "uuid": "bc2e4ed3-1218-46b3-b7df-adeff9cac5f4" 00:16:12.854 } 00:16:12.854 ] 00:16:12.854 }, 00:16:12.854 { 00:16:12.854 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:12.854 "subtype": "NVMe", 00:16:12.854 "listen_addresses": [ 00:16:12.854 { 00:16:12.854 "trtype": "VFIOUSER", 00:16:12.854 "adrfam": "IPv4", 00:16:12.854 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:12.854 "trsvcid": "0" 00:16:12.854 } 00:16:12.854 ], 00:16:12.854 "allow_any_host": true, 00:16:12.854 "hosts": [], 00:16:12.854 "serial_number": "SPDK2", 00:16:12.854 "model_number": "SPDK bdev 
Controller", 00:16:12.854 "max_namespaces": 32, 00:16:12.854 "min_cntlid": 1, 00:16:12.854 "max_cntlid": 65519, 00:16:12.854 "namespaces": [ 00:16:12.854 { 00:16:12.854 "nsid": 1, 00:16:12.854 "bdev_name": "Malloc2", 00:16:12.854 "name": "Malloc2", 00:16:12.854 "nguid": "A88BC5EAF95E48CB83F04C61BE85F195", 00:16:12.854 "uuid": "a88bc5ea-f95e-48cb-83f0-4c61be85f195" 00:16:12.854 } 00:16:12.854 ] 00:16:12.854 } 00:16:12.854 ] 00:16:12.854 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1237208 00:16:12.854 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:12.854 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:12.854 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:12.854 16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:12.854 [2024-11-20 16:10:48.674913] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:16:12.854 [2024-11-20 16:10:48.674957] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237233 ] 00:16:12.854 [2024-11-20 16:10:48.712393] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:12.854 [2024-11-20 16:10:48.721342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.854 [2024-11-20 16:10:48.721361] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa8012f9000 00:16:12.854 [2024-11-20 16:10:48.722340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.854 [2024-11-20 16:10:48.723345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.854 [2024-11-20 16:10:48.724351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.854 [2024-11-20 16:10:48.725354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.854 [2024-11-20 16:10:48.726365] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.854 [2024-11-20 16:10:48.727370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.854 [2024-11-20 16:10:48.728381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.854 [2024-11-20 16:10:48.729385] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:16:12.854 [2024-11-20 16:10:48.730392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.854 [2024-11-20 16:10:48.730400] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa8012ee000 00:16:12.854 [2024-11-20 16:10:48.731312] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:12.854 [2024-11-20 16:10:48.740692] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:12.854 [2024-11-20 16:10:48.740711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:12.854 [2024-11-20 16:10:48.745773] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:12.854 [2024-11-20 16:10:48.745810] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:12.854 [2024-11-20 16:10:48.745867] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:12.854 [2024-11-20 16:10:48.745878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:12.854 [2024-11-20 16:10:48.745882] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:12.854 [2024-11-20 16:10:48.746779] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:12.854 [2024-11-20 16:10:48.746787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:12.854 [2024-11-20 16:10:48.746795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:12.854 [2024-11-20 16:10:48.747781] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:12.854 [2024-11-20 16:10:48.747788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:12.854 [2024-11-20 16:10:48.747793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:12.854 [2024-11-20 16:10:48.748788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:12.854 [2024-11-20 16:10:48.748795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:12.854 [2024-11-20 16:10:48.749798] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:12.854 [2024-11-20 16:10:48.749805] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
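For orientation: the *DEBUG* records in this stretch come from the debug-log flags (-L nvme -L nvme_vfio -L vfio_pci) on the spdk_nvme_identify invocation above, and the register offsets they print are the standard NVMe controller registers (0x0 CAP, 0x8 VS, 0x14 CC, 0x1c CSTS; the 0x10300 read at offset 0x8 is VS 1.3). So the sequence traced here is the usual enable handshake: clear CC.EN, wait for CSTS.RDY = 0, then set CC.EN = 1 and wait for CSTS.RDY = 1. A minimal sketch of reproducing this trace by hand against an already-running target; the SOCK variable and the relative build path are illustrative, not part of the harness:

    # Assumes a target is already serving this vfio-user socket (path illustrative).
    SOCK=/var/run/vfio-user/domain/vfio-user2/2
    ./build/bin/spdk_nvme_identify \
        -r "trtype:VFIOUSER traddr:$SOCK subnqn:nqn.2019-07.io.spdk:cnode2" \
        -g -L nvme -L nvme_vfio -L vfio_pci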
00:16:12.854 [2024-11-20 16:10:48.749809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:12.854 [2024-11-20 16:10:48.749813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:12.854 [2024-11-20 16:10:48.749919] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:12.854 [2024-11-20 16:10:48.749923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:12.854 [2024-11-20 16:10:48.749926] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:12.854 [2024-11-20 16:10:48.750808] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:12.854 [2024-11-20 16:10:48.751814] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:12.854 [2024-11-20 16:10:48.752822] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:12.854 [2024-11-20 16:10:48.753825] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.854 [2024-11-20 16:10:48.753855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:12.854 [2024-11-20 16:10:48.754835] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:12.854 [2024-11-20 16:10:48.754842] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:12.854 [2024-11-20 16:10:48.754846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:12.854 [2024-11-20 16:10:48.754860] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:12.854 [2024-11-20 16:10:48.754869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:12.854 [2024-11-20 16:10:48.754879] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.854 [2024-11-20 16:10:48.754884] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.854 [2024-11-20 16:10:48.754887] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.854 [2024-11-20 16:10:48.754897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.855 [2024-11-20 16:10:48.762165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:12.855 
[2024-11-20 16:10:48.762175] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:12.855 [2024-11-20 16:10:48.762179] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:12.855 [2024-11-20 16:10:48.762182] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:12.855 [2024-11-20 16:10:48.762185] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:12.855 [2024-11-20 16:10:48.762191] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:12.855 [2024-11-20 16:10:48.762194] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:12.855 [2024-11-20 16:10:48.762197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.762205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.762213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:12.855 [2024-11-20 16:10:48.770164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:12.855 [2024-11-20 16:10:48.770174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.855 [2024-11-20 16:10:48.770180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.855 [2024-11-20 16:10:48.770186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.855 [2024-11-20 16:10:48.770193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.855 [2024-11-20 16:10:48.770196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.770201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.770208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:12.855 [2024-11-20 16:10:48.778167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:12.855 [2024-11-20 16:10:48.778176] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:12.855 [2024-11-20 16:10:48.778180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:16:12.855 [2024-11-20 16:10:48.778185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.778190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.778197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:12.855 [2024-11-20 16:10:48.786166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:12.855 [2024-11-20 16:10:48.786214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.786219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:12.855 [2024-11-20 16:10:48.786225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:12.855 [2024-11-20 16:10:48.786228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:12.855 [2024-11-20 16:10:48.786230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.855 [2024-11-20 16:10:48.786235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:13.118 [2024-11-20 16:10:48.794165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:13.118 [2024-11-20 16:10:48.794174] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:13.118 [2024-11-20 16:10:48.794181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.794186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.794191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.118 [2024-11-20 16:10:48.794194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.118 [2024-11-20 16:10:48.794196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.118 [2024-11-20 16:10:48.794201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.118 [2024-11-20 16:10:48.802165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:13.118 [2024-11-20 16:10:48.802178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.802184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.802189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.118 [2024-11-20 16:10:48.802192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.118 [2024-11-20 16:10:48.802194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.118 [2024-11-20 16:10:48.802199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.118 [2024-11-20 16:10:48.810165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:13.118 [2024-11-20 16:10:48.810173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.810177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.810185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.810190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.810193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.810197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.810201] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:13.118 [2024-11-20 16:10:48.810204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:13.118 [2024-11-20 16:10:48.810208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:13.118 [2024-11-20 16:10:48.810220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:13.118 [2024-11-20 16:10:48.818166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:13.118 [2024-11-20 16:10:48.818177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:13.118 [2024-11-20 16:10:48.826164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:13.118 [2024-11-20 16:10:48.826174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:13.118 [2024-11-20 16:10:48.834165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
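Each *NOTICE* pair in this stretch is one admin command and its completion: nvme_admin_qpair_print_command shows the submission (opcode, cid, cdw10/cdw11, PRP pointers) and spdk_nvme_print_completion shows the matching completion entry, with the submission-queue head (sqhd) advancing as each command is consumed. When digging through a saved capture of a run like this one, filtering on those two print sites isolates the admin-queue conversation; run.log is an assumed name for the saved console output:

    # Pull out just the admin submit/complete pairs (file name is an assumption).
    grep -E 'nvme_admin_qpair_print_command|spdk_nvme_print_completion' run.log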
00:16:13.118 [2024-11-20 16:10:48.834175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:13.118 [2024-11-20 16:10:48.842166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:13.118 [2024-11-20 16:10:48.842178] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:13.118 [2024-11-20 16:10:48.842181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:13.118 [2024-11-20 16:10:48.842184] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:13.118 [2024-11-20 16:10:48.842186] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:13.118 [2024-11-20 16:10:48.842189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:13.119 [2024-11-20 16:10:48.842193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:13.119 [2024-11-20 16:10:48.842199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:13.119 [2024-11-20 16:10:48.842202] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:13.119 [2024-11-20 16:10:48.842205] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.119 [2024-11-20 16:10:48.842209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:13.119 [2024-11-20 16:10:48.842214] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:13.119 [2024-11-20 16:10:48.842219] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.119 [2024-11-20 16:10:48.842221] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.119 [2024-11-20 16:10:48.842225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.119 [2024-11-20 16:10:48.842231] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:13.119 [2024-11-20 16:10:48.842234] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:13.119 [2024-11-20 16:10:48.842237] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:13.119 [2024-11-20 16:10:48.842241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:13.119 [2024-11-20 16:10:48.850165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:13.119 [2024-11-20 16:10:48.850176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:13.119 [2024-11-20 16:10:48.850184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:13.119 
[2024-11-20 16:10:48.850189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:13.119 ===================================================== 00:16:13.119 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:13.119 ===================================================== 00:16:13.119 Controller Capabilities/Features 00:16:13.119 ================================ 00:16:13.119 Vendor ID: 4e58 00:16:13.119 Subsystem Vendor ID: 4e58 00:16:13.119 Serial Number: SPDK2 00:16:13.119 Model Number: SPDK bdev Controller 00:16:13.119 Firmware Version: 25.01 00:16:13.119 Recommended Arb Burst: 6 00:16:13.119 IEEE OUI Identifier: 8d 6b 50 00:16:13.119 Multi-path I/O 00:16:13.119 May have multiple subsystem ports: Yes 00:16:13.119 May have multiple controllers: Yes 00:16:13.119 Associated with SR-IOV VF: No 00:16:13.119 Max Data Transfer Size: 131072 00:16:13.119 Max Number of Namespaces: 32 00:16:13.119 Max Number of I/O Queues: 127 00:16:13.119 NVMe Specification Version (VS): 1.3 00:16:13.119 NVMe Specification Version (Identify): 1.3 00:16:13.119 Maximum Queue Entries: 256 00:16:13.119 Contiguous Queues Required: Yes 00:16:13.119 Arbitration Mechanisms Supported 00:16:13.119 Weighted Round Robin: Not Supported 00:16:13.119 Vendor Specific: Not Supported 00:16:13.119 Reset Timeout: 15000 ms 00:16:13.119 Doorbell Stride: 4 bytes 00:16:13.119 NVM Subsystem Reset: Not Supported 00:16:13.119 Command Sets Supported 00:16:13.119 NVM Command Set: Supported 00:16:13.119 Boot Partition: Not Supported 00:16:13.119 Memory Page Size Minimum: 4096 bytes 00:16:13.119 Memory Page Size Maximum: 4096 bytes 00:16:13.119 Persistent Memory Region: Not Supported 00:16:13.119 Optional Asynchronous Events Supported 00:16:13.119 Namespace Attribute Notices: Supported 00:16:13.119 Firmware Activation Notices: Not Supported 00:16:13.119 ANA Change Notices: Not Supported 00:16:13.119 PLE Aggregate Log Change Notices: Not Supported 00:16:13.119 LBA Status Info Alert Notices: Not Supported 00:16:13.119 EGE Aggregate Log Change Notices: Not Supported 00:16:13.119 Normal NVM Subsystem Shutdown event: Not Supported 00:16:13.119 Zone Descriptor Change Notices: Not Supported 00:16:13.119 Discovery Log Change Notices: Not Supported 00:16:13.119 Controller Attributes 00:16:13.119 128-bit Host Identifier: Supported 00:16:13.119 Non-Operational Permissive Mode: Not Supported 00:16:13.119 NVM Sets: Not Supported 00:16:13.119 Read Recovery Levels: Not Supported 00:16:13.119 Endurance Groups: Not Supported 00:16:13.119 Predictable Latency Mode: Not Supported 00:16:13.119 Traffic Based Keep Alive: Not Supported 00:16:13.119 Namespace Granularity: Not Supported 00:16:13.119 SQ Associations: Not Supported 00:16:13.119 UUID List: Not Supported 00:16:13.119 Multi-Domain Subsystem: Not Supported 00:16:13.119 Fixed Capacity Management: Not Supported 00:16:13.119 Variable Capacity Management: Not Supported 00:16:13.119 Delete Endurance Group: Not Supported 00:16:13.119 Delete NVM Set: Not Supported 00:16:13.119 Extended LBA Formats Supported: Not Supported 00:16:13.119 Flexible Data Placement Supported: Not Supported 00:16:13.119 00:16:13.119 Controller Memory Buffer Support 00:16:13.119 ================================ 00:16:13.119 Supported: No 00:16:13.119 00:16:13.119 Persistent Memory Region Support 00:16:13.119 ================================ 00:16:13.119 Supported: No 00:16:13.119 00:16:13.119 Admin Command Set Attributes
00:16:13.119 ============================ 00:16:13.119 Security Send/Receive: Not Supported 00:16:13.119 Format NVM: Not Supported 00:16:13.119 Firmware Activate/Download: Not Supported 00:16:13.119 Namespace Management: Not Supported 00:16:13.119 Device Self-Test: Not Supported 00:16:13.119 Directives: Not Supported 00:16:13.119 NVMe-MI: Not Supported 00:16:13.119 Virtualization Management: Not Supported 00:16:13.119 Doorbell Buffer Config: Not Supported 00:16:13.119 Get LBA Status Capability: Not Supported 00:16:13.119 Command & Feature Lockdown Capability: Not Supported 00:16:13.119 Abort Command Limit: 4 00:16:13.119 Async Event Request Limit: 4 00:16:13.119 Number of Firmware Slots: N/A 00:16:13.119 Firmware Slot 1 Read-Only: N/A 00:16:13.119 Firmware Activation Without Reset: N/A 00:16:13.119 Multiple Update Detection Support: N/A 00:16:13.119 Firmware Update Granularity: No Information Provided 00:16:13.119 Per-Namespace SMART Log: No 00:16:13.119 Asymmetric Namespace Access Log Page: Not Supported 00:16:13.119 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:13.119 Command Effects Log Page: Supported 00:16:13.119 Get Log Page Extended Data: Supported 00:16:13.119 Telemetry Log Pages: Not Supported 00:16:13.119 Persistent Event Log Pages: Not Supported 00:16:13.119 Supported Log Pages Log Page: May Support 00:16:13.119 Commands Supported & Effects Log Page: Not Supported 00:16:13.119 Feature Identifiers & Effects Log Page: May Support 00:16:13.119 NVMe-MI Commands & Effects Log Page: May Support 00:16:13.119 Data Area 4 for Telemetry Log: Not Supported 00:16:13.119 Error Log Page Entries Supported: 128 00:16:13.119 Keep Alive: Supported 00:16:13.119 Keep Alive Granularity: 10000 ms 00:16:13.119 00:16:13.119 NVM Command Set Attributes 00:16:13.119 ========================== 00:16:13.119 Submission Queue Entry Size 00:16:13.119 Max: 64 00:16:13.119 Min: 64 00:16:13.119 Completion Queue Entry Size 00:16:13.119 Max: 16 00:16:13.119 Min: 16 00:16:13.119 Number of Namespaces: 32 00:16:13.119 Compare Command: Supported 00:16:13.119 Write Uncorrectable Command: Not Supported 00:16:13.119 Dataset Management Command: Supported 00:16:13.119 Write Zeroes Command: Supported 00:16:13.119 Set Features Save Field: Not Supported 00:16:13.119 Reservations: Not Supported 00:16:13.119 Timestamp: Not Supported 00:16:13.119 Copy: Supported 00:16:13.119 Volatile Write Cache: Present 00:16:13.119 Atomic Write Unit (Normal): 1 00:16:13.119 Atomic Write Unit (PFail): 1 00:16:13.119 Atomic Compare & Write Unit: 1 00:16:13.119 Fused Compare & Write: Supported 00:16:13.119 Scatter-Gather List 00:16:13.119 SGL Command Set: Supported (Dword aligned) 00:16:13.119 SGL Keyed: Not Supported 00:16:13.119 SGL Bit Bucket Descriptor: Not Supported 00:16:13.119 SGL Metadata Pointer: Not Supported 00:16:13.119 Oversized SGL: Not Supported 00:16:13.119 SGL Metadata Address: Not Supported 00:16:13.119 SGL Offset: Not Supported 00:16:13.119 Transport SGL Data Block: Not Supported 00:16:13.119 Replay Protected Memory Block: Not Supported 00:16:13.119 00:16:13.119 Firmware Slot Information 00:16:13.119 ========================= 00:16:13.119 Active slot: 1 00:16:13.119 Slot 1 Firmware Revision: 25.01 00:16:13.119 00:16:13.119 00:16:13.119 Commands Supported and Effects 00:16:13.119 ============================== 00:16:13.119 Admin Commands 00:16:13.119 -------------- 00:16:13.119 Get Log Page (02h): Supported 00:16:13.119 Identify (06h): Supported 00:16:13.119 Abort (08h): Supported 00:16:13.119 Set Features (09h): Supported
00:16:13.119 Get Features (0Ah): Supported 00:16:13.120 Asynchronous Event Request (0Ch): Supported 00:16:13.120 Keep Alive (18h): Supported 00:16:13.120 I/O Commands 00:16:13.120 ------------ 00:16:13.120 Flush (00h): Supported LBA-Change 00:16:13.120 Write (01h): Supported LBA-Change 00:16:13.120 Read (02h): Supported 00:16:13.120 Compare (05h): Supported 00:16:13.120 Write Zeroes (08h): Supported LBA-Change 00:16:13.120 Dataset Management (09h): Supported LBA-Change 00:16:13.120 Copy (19h): Supported LBA-Change 00:16:13.120 00:16:13.120 Error Log 00:16:13.120 ========= 00:16:13.120 00:16:13.120 Arbitration 00:16:13.120 =========== 00:16:13.120 Arbitration Burst: 1 00:16:13.120 00:16:13.120 Power Management 00:16:13.120 ================ 00:16:13.120 Number of Power States: 1 00:16:13.120 Current Power State: Power State #0 00:16:13.120 Power State #0: 00:16:13.120 Max Power: 0.00 W 00:16:13.120 Non-Operational State: Operational 00:16:13.120 Entry Latency: Not Reported 00:16:13.120 Exit Latency: Not Reported 00:16:13.120 Relative Read Throughput: 0 00:16:13.120 Relative Read Latency: 0 00:16:13.120 Relative Write Throughput: 0 00:16:13.120 Relative Write Latency: 0 00:16:13.120 Idle Power: Not Reported 00:16:13.120 Active Power: Not Reported 00:16:13.120 Non-Operational Permissive Mode: Not Supported 00:16:13.120 00:16:13.120 Health Information 00:16:13.120 ================== 00:16:13.120 Critical Warnings: 00:16:13.120 Available Spare Space: OK 00:16:13.120 Temperature: OK 00:16:13.120 Device Reliability: OK 00:16:13.120 Read Only: No 00:16:13.120 Volatile Memory Backup: OK 00:16:13.120 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:13.120 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:13.120 Available Spare: 0% 00:16:13.120 Available Spare Threshold: 0% 00:16:13.120 Life Percentage Used: 0% 00:16:13.120 Data Units Read: 0 00:16:13.120 Data Units Written: 0 00:16:13.120 Host Read Commands: 0 00:16:13.120 Host Write Commands: 0 00:16:13.120 Controller Busy Time: 0 minutes 00:16:13.120 Power Cycles: 0 00:16:13.120 Power On Hours: 0 hours 00:16:13.120 Unsafe Shutdowns: 0 00:16:13.120 Unrecoverable Media Errors: 0 00:16:13.120 Lifetime Error Log Entries: 0 00:16:13.120 Warning Temperature Time: 0 minutes 00:16:13.120 Critical Temperature Time: 0 minutes 00:16:13.120 00:16:13.120 Number of Queues 00:16:13.120 ================ 00:16:13.120 Number of I/O Submission Queues: 127 00:16:13.120 Number of I/O Completion Queues: 127 00:16:13.120 00:16:13.120 Active Namespaces 00:16:13.120 ================= 00:16:13.120 Namespace ID:1 00:16:13.120 Error Recovery Timeout: Unlimited 00:16:13.120 Command Set Identifier: NVM (00h) 00:16:13.120 Deallocate: Supported 00:16:13.120 Deallocated/Unwritten Error: Not Supported 00:16:13.120 Deallocated Read Value: Unknown 00:16:13.120 Deallocate in Write Zeroes: Not Supported 00:16:13.120 Deallocated Guard Field: 0xFFFF 00:16:13.120 Flush: Supported 00:16:13.120 Reservation: Supported 00:16:13.120 Namespace Sharing Capabilities: Multiple Controllers 00:16:13.120 Size (in LBAs): 131072 (0GiB) 00:16:13.120 Capacity (in LBAs): 131072 (0GiB) 00:16:13.120 Utilization (in LBAs): 131072 (0GiB) 00:16:13.120 NGUID: A88BC5EAF95E48CB83F04C61BE85F195 00:16:13.120 UUID: a88bc5ea-f95e-48cb-83f0-4c61be85f195 00:16:13.120 Thin Provisioning: Not Supported 00:16:13.120 Per-NS Atomic Units: Yes 00:16:13.120 Atomic Boundary Size (Normal): 0 00:16:13.120 Atomic Boundary Size (PFail): 0 00:16:13.120 Atomic Boundary Offset: 0 00:16:13.120 Maximum Single Source Range Length: 65535 00:16:13.120 Maximum Copy Length: 65535 00:16:13.120 Maximum Source Range Count: 1 00:16:13.120 NGUID/EUI64 Never Reused: No 00:16:13.120 Namespace Write Protected: No 00:16:13.120 Number of LBA Formats: 1 00:16:13.120 Current LBA Format: LBA Format #00 00:16:13.120 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:13.120 00:16:13.120
[2024-11-20 16:10:48.850262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:13.120 [2024-11-20 16:10:48.858165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:13.120 [2024-11-20 16:10:48.858187] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:13.120 [2024-11-20 16:10:48.858194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.120 [2024-11-20 16:10:48.858199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.120 [2024-11-20 16:10:48.858203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.120 [2024-11-20 16:10:48.858208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.120 [2024-11-20 16:10:48.858239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:13.120 [2024-11-20 16:10:48.858246] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:13.120 [2024-11-20 16:10:48.859243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:13.120 [2024-11-20 16:10:48.859280] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:13.120 [2024-11-20 16:10:48.859284] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:13.120 [2024-11-20 16:10:48.860250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:13.120 [2024-11-20 16:10:48.860258] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:13.120 [2024-11-20 16:10:48.860301] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:13.120 [2024-11-20 16:10:48.861267] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:13.120
16:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:13.120 [2024-11-20 16:10:49.051220] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.410 Initializing NVMe Controllers 00:16:18.410
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:18.410 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:18.410 Initialization complete. Launching workers. 00:16:18.410 ======================================================== 00:16:18.410 Latency(us) 00:16:18.410 Device Information : IOPS MiB/s Average min max 00:16:18.410 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39971.57 156.14 3201.95 846.08 7766.89 00:16:18.410 ======================================================== 00:16:18.410 Total : 39971.57 156.14 3201.95 846.08 7766.89 00:16:18.410 00:16:18.410 [2024-11-20 16:10:54.156358] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.410 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:18.671 [2024-11-20 16:10:54.346944] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.959 Initializing NVMe Controllers 00:16:23.959 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:23.959 Initialization complete. Launching workers. 00:16:23.959 ======================================================== 00:16:23.959 Latency(us) 00:16:23.959 Device Information : IOPS MiB/s Average min max 00:16:23.959 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39995.16 156.23 3200.83 911.12 9648.38 00:16:23.959 ======================================================== 00:16:23.959 Total : 39995.16 156.23 3200.83 911.12 9648.38 00:16:23.959 00:16:23.959 [2024-11-20 16:10:59.364421] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.959 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:23.960 [2024-11-20 16:10:59.567614] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.243 [2024-11-20 16:11:04.692253] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.243 Initializing NVMe Controllers 00:16:29.243 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:29.243 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:29.243 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:29.243 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:29.243 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:29.243 Initialization complete. Launching workers. 
00:16:29.243 Starting thread on core 2 00:16:29.243 Starting thread on core 3 00:16:29.243 Starting thread on core 1 00:16:29.243 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:29.243 [2024-11-20 16:11:04.940531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:32.543 [2024-11-20 16:11:07.998207] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:32.543 Initializing NVMe Controllers 00:16:32.543 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:32.543 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:32.543 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:32.543 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:32.543 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:32.543 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:32.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:32.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:32.543 Initialization complete. Launching workers. 00:16:32.543 Starting thread on core 1 with urgent priority queue 00:16:32.543 Starting thread on core 2 with urgent priority queue 00:16:32.543 Starting thread on core 3 with urgent priority queue 00:16:32.543 Starting thread on core 0 with urgent priority queue 00:16:32.544 SPDK bdev Controller (SPDK2 ) core 0: 16108.33 IO/s 6.21 secs/100000 ios 00:16:32.544 SPDK bdev Controller (SPDK2 ) core 1: 12120.33 IO/s 8.25 secs/100000 ios 00:16:32.544 SPDK bdev Controller (SPDK2 ) core 2: 8636.67 IO/s 11.58 secs/100000 ios 00:16:32.544 SPDK bdev Controller (SPDK2 ) core 3: 9971.00 IO/s 10.03 secs/100000 ios 00:16:32.544 ======================================================== 00:16:32.544 00:16:32.544 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:32.544 [2024-11-20 16:11:08.243538] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:32.544 Initializing NVMe Controllers 00:16:32.544 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:32.544 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:32.544 Namespace ID: 1 size: 0GB 00:16:32.544 Initialization complete. 00:16:32.544 INFO: using host memory buffer for IO 00:16:32.544 Hello world! 
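For reference, the perf, arbitration, and hello_world runs above all share the same transport ID string; a minimal sketch for reproducing them against this vfio-user endpoint (the SPDK root and socket path are assumed to match this workspace, as shown in the commands logged above):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # queue depth 128, 4 KiB reads for 5 s, pinned to core 1 (mask 0x2)
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # priority arbitration across four cores for 3 s
    $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
    # single hello-world I/O through the same controller
    $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"
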
00:16:32.544 [2024-11-20 16:11:08.253599] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:32.544 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:32.805 [2024-11-20 16:11:08.485514] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:33.747 Initializing NVMe Controllers 00:16:33.747 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.747 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.747 Initialization complete. Launching workers. 00:16:33.747 submit (in ns) avg, min, max = 6888.4, 2832.5, 3999374.2 00:16:33.747 complete (in ns) avg, min, max = 16778.1, 1669.2, 3998229.2 00:16:33.747 00:16:33.747 Submit histogram 00:16:33.747 ================ 00:16:33.747 Range in us Cumulative Count 00:16:33.747 2.827 - 2.840: 0.1171% ( 24) 00:16:33.747 2.840 - 2.853: 0.9566% ( 172) 00:16:33.747 2.853 - 2.867: 2.5866% ( 334) 00:16:33.747 2.867 - 2.880: 6.2177% ( 744) 00:16:33.747 2.880 - 2.893: 11.7521% ( 1134) 00:16:33.747 2.893 - 2.907: 17.7062% ( 1220) 00:16:33.747 2.907 - 2.920: 22.1474% ( 910) 00:16:33.747 2.920 - 2.933: 27.6623% ( 1130) 00:16:33.747 2.933 - 2.947: 32.5671% ( 1005) 00:16:33.747 2.947 - 2.960: 37.7696% ( 1066) 00:16:33.747 2.960 - 2.973: 43.0747% ( 1087) 00:16:33.747 2.973 - 2.987: 48.7994% ( 1173) 00:16:33.747 2.987 - 3.000: 55.5246% ( 1378) 00:16:33.747 3.000 - 3.013: 63.6018% ( 1655) 00:16:33.747 3.013 - 3.027: 71.9863% ( 1718) 00:16:33.747 3.027 - 3.040: 80.1269% ( 1668) 00:16:33.747 3.040 - 3.053: 86.9839% ( 1405) 00:16:33.747 3.053 - 3.067: 92.8892% ( 1210) 00:16:33.747 3.067 - 3.080: 96.4763% ( 735) 00:16:33.747 3.080 - 3.093: 98.2577% ( 365) 00:16:33.747 3.093 - 3.107: 98.9751% ( 147) 00:16:33.747 3.107 - 3.120: 99.2972% ( 66) 00:16:33.747 3.120 - 3.133: 99.4241% ( 26) 00:16:33.747 3.133 - 3.147: 99.4876% ( 13) 00:16:33.747 3.147 - 3.160: 99.4973% ( 2) 00:16:33.747 3.160 - 3.173: 99.5071% ( 2) 00:16:33.747 3.173 - 3.187: 99.5217% ( 3) 00:16:33.747 3.200 - 3.213: 99.5266% ( 1) 00:16:33.747 3.253 - 3.267: 99.5315% ( 1) 00:16:33.747 3.267 - 3.280: 99.5364% ( 1) 00:16:33.747 3.293 - 3.307: 99.5412% ( 1) 00:16:33.747 3.733 - 3.760: 99.5461% ( 1) 00:16:33.747 4.053 - 4.080: 99.5510% ( 1) 00:16:33.747 4.187 - 4.213: 99.5559% ( 1) 00:16:33.748 4.267 - 4.293: 99.5608% ( 1) 00:16:33.748 4.293 - 4.320: 99.5656% ( 1) 00:16:33.748 4.320 - 4.347: 99.5705% ( 1) 00:16:33.748 4.453 - 4.480: 99.5754% ( 1) 00:16:33.748 4.480 - 4.507: 99.5803% ( 1) 00:16:33.748 4.613 - 4.640: 99.5852% ( 1) 00:16:33.748 4.640 - 4.667: 99.5900% ( 1) 00:16:33.748 4.693 - 4.720: 99.5998% ( 2) 00:16:33.748 4.720 - 4.747: 99.6047% ( 1) 00:16:33.748 4.773 - 4.800: 99.6144% ( 2) 00:16:33.748 4.800 - 4.827: 99.6193% ( 1) 00:16:33.748 4.827 - 4.853: 99.6291% ( 2) 00:16:33.748 4.907 - 4.933: 99.6388% ( 2) 00:16:33.748 4.933 - 4.960: 99.6437% ( 1) 00:16:33.748 4.960 - 4.987: 99.6486% ( 1) 00:16:33.748 4.987 - 5.013: 99.6535% ( 1) 00:16:33.748 5.013 - 5.040: 99.6633% ( 2) 00:16:33.748 5.040 - 5.067: 99.6681% ( 1) 00:16:33.748 5.067 - 5.093: 99.6730% ( 1) 00:16:33.748 5.093 - 5.120: 99.6877% ( 3) 00:16:33.748 5.120 - 5.147: 99.6974% ( 2) 00:16:33.748 5.253 - 5.280: 99.7023% ( 1) 00:16:33.748 5.307 - 5.333: 99.7072% ( 1) 00:16:33.748 5.360 - 5.387: 
99.7121% ( 1) 00:16:33.748 5.387 - 5.413: 99.7169% ( 1) 00:16:33.748 5.547 - 5.573: 99.7218% ( 1) 00:16:33.748 5.573 - 5.600: 99.7267% ( 1) 00:16:33.748 5.680 - 5.707: 99.7365% ( 2) 00:16:33.748 5.760 - 5.787: 99.7413% ( 1) 00:16:33.748 5.787 - 5.813: 99.7462% ( 1) 00:16:33.748 5.973 - 6.000: 99.7511% ( 1) 00:16:33.748 6.000 - 6.027: 99.7560% ( 1) 00:16:33.748 6.027 - 6.053: 99.7657% ( 2) 00:16:33.748 6.080 - 6.107: 99.7706% ( 1) 00:16:33.748 6.107 - 6.133: 99.7755% ( 1) 00:16:33.748 6.133 - 6.160: 99.7901% ( 3) 00:16:33.748 6.187 - 6.213: 99.7950% ( 1) 00:16:33.748 6.240 - 6.267: 99.7999% ( 1) 00:16:33.748 6.347 - 6.373: 99.8048% ( 1) 00:16:33.748 6.373 - 6.400: 99.8194% ( 3) 00:16:33.748 6.400 - 6.427: 99.8243% ( 1) 00:16:33.748 6.560 - 6.587: 99.8292% ( 1) 00:16:33.748 6.640 - 6.667: 99.8341% ( 1) 00:16:33.748 6.667 - 6.693: 99.8389% ( 1) 00:16:33.748 6.693 - 6.720: 99.8438% ( 1) 00:16:33.748 [2024-11-20 16:11:09.579687] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:33.748 6.773 - 6.800: 99.8487% ( 1) 00:16:33.748 6.800 - 6.827: 99.8585% ( 2) 00:16:33.748 6.880 - 6.933: 99.8682% ( 2) 00:16:33.748 7.147 - 7.200: 99.8780% ( 2) 00:16:33.748 7.360 - 7.413: 99.8829% ( 1) 00:16:33.748 7.573 - 7.627: 99.8878% ( 1) 00:16:33.748 7.893 - 7.947: 99.8926% ( 1) 00:16:33.748 8.267 - 8.320: 99.8975% ( 1) 00:16:33.748 12.053 - 12.107: 99.9024% ( 1) 00:16:33.748 3986.773 - 4014.080: 100.0000% ( 20) 00:16:33.748 00:16:33.748 Complete histogram 00:16:33.748 ================== 00:16:33.748 Range in us Cumulative Count 00:16:33.748 1.667 - 1.673: 0.0049% ( 1) 00:16:33.748 1.680 - 1.687: 0.0098% ( 1) 00:16:33.748 1.687 - 1.693: 0.0195% ( 2) 00:16:33.748 1.693 - 1.700: 0.1171% ( 20) 00:16:33.748 1.700 - 1.707: 0.5027% ( 79) 00:16:33.748 1.707 - 1.720: 2.1523% ( 338) 00:16:33.748 1.720 - 1.733: 10.4392% ( 1698) 00:16:33.748 1.733 - 1.747: 20.6442% ( 2091) 00:16:33.748 1.747 - 1.760: 61.2201% ( 8314) 00:16:33.748 1.760 - 1.773: 78.9263% ( 3628) 00:16:33.748 1.773 - 1.787: 83.3626% ( 909) 00:16:33.748 1.787 - 1.800: 84.8804% ( 311) 00:16:33.748 1.800 - 1.813: 87.9697% ( 633) 00:16:33.748 1.813 - 1.827: 92.8209% ( 994) 00:16:33.748 1.827 - 1.840: 97.0425% ( 865) 00:16:33.748 1.840 - 1.853: 98.9263% ( 386) 00:16:33.748 1.853 - 1.867: 99.3753% ( 92) 00:16:33.748 1.867 - 1.880: 99.4290% ( 11) 00:16:33.748 1.880 - 1.893: 99.4388% ( 2) 00:16:33.748 1.893 - 1.907: 99.4436% ( 1) 00:16:33.748 1.920 - 1.933: 99.4485% ( 1) 00:16:33.748 1.947 - 1.960: 99.4534% ( 1) 00:16:33.748 3.413 - 3.440: 99.4632% ( 2) 00:16:33.748 3.440 - 3.467: 99.4680% ( 1) 00:16:33.748 3.493 - 3.520: 99.4729% ( 1) 00:16:33.748 3.547 - 3.573: 99.4778% ( 1) 00:16:33.748 3.573 - 3.600: 99.4876% ( 2) 00:16:33.748 3.600 - 3.627: 99.4924% ( 1) 00:16:33.748 3.840 - 3.867: 99.4973% ( 1) 00:16:33.748 3.893 - 3.920: 99.5071% ( 2) 00:16:33.748 4.213 - 4.240: 99.5120% ( 1) 00:16:33.748 4.347 - 4.373: 99.5168% ( 1) 00:16:33.748 4.373 - 4.400: 99.5217% ( 1) 00:16:33.748 4.480 - 4.507: 99.5266% ( 1) 00:16:33.748 4.560 - 4.587: 99.5315% ( 1) 00:16:33.748 4.587 - 4.613: 99.5364% ( 1) 00:16:33.748 4.773 - 4.800: 99.5461% ( 2) 00:16:33.748 4.800 - 4.827: 99.5510% ( 1) 00:16:33.748 4.853 - 4.880: 99.5559% ( 1) 00:16:33.748 4.880 - 4.907: 99.5656% ( 2) 00:16:33.748 4.907 - 4.933: 99.5705% ( 1) 00:16:33.748 5.173 - 5.200: 99.5754% ( 1) 00:16:33.748 5.333 - 5.360: 99.5852% ( 2) 00:16:33.748 5.413 - 5.440: 99.5900% ( 1) 00:16:33.748 5.467 - 5.493: 99.5949% ( 1) 00:16:33.748 5.520 - 5.547: 99.5998% ( 1) 
00:16:33.748 5.707 - 5.733: 99.6047% ( 1) 00:16:33.748 5.867 - 5.893: 99.6096% ( 1) 00:16:33.748 6.107 - 6.133: 99.6144% ( 1) 00:16:33.748 9.067 - 9.120: 99.6193% ( 1) 00:16:33.748 10.773 - 10.827: 99.6242% ( 1) 00:16:33.748 3986.773 - 4014.080: 100.0000% ( 77) 00:16:33.748 00:16:33.748 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:33.748 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:33.748 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:33.748 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:33.748 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:34.008 [ 00:16:34.008 { 00:16:34.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:34.008 "subtype": "Discovery", 00:16:34.008 "listen_addresses": [], 00:16:34.008 "allow_any_host": true, 00:16:34.008 "hosts": [] 00:16:34.008 }, 00:16:34.008 { 00:16:34.008 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:34.008 "subtype": "NVMe", 00:16:34.008 "listen_addresses": [ 00:16:34.008 { 00:16:34.008 "trtype": "VFIOUSER", 00:16:34.008 "adrfam": "IPv4", 00:16:34.008 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:34.008 "trsvcid": "0" 00:16:34.008 } 00:16:34.008 ], 00:16:34.008 "allow_any_host": true, 00:16:34.008 "hosts": [], 00:16:34.008 "serial_number": "SPDK1", 00:16:34.008 "model_number": "SPDK bdev Controller", 00:16:34.008 "max_namespaces": 32, 00:16:34.008 "min_cntlid": 1, 00:16:34.008 "max_cntlid": 65519, 00:16:34.008 "namespaces": [ 00:16:34.008 { 00:16:34.008 "nsid": 1, 00:16:34.008 "bdev_name": "Malloc1", 00:16:34.008 "name": "Malloc1", 00:16:34.008 "nguid": "C80636EF8DE24AF5BE2657C7CA3B6744", 00:16:34.008 "uuid": "c80636ef-8de2-4af5-be26-57c7ca3b6744" 00:16:34.008 }, 00:16:34.008 { 00:16:34.008 "nsid": 2, 00:16:34.008 "bdev_name": "Malloc3", 00:16:34.008 "name": "Malloc3", 00:16:34.008 "nguid": "BC2E4ED3121846B3B7DFADEFF9CAC5F4", 00:16:34.008 "uuid": "bc2e4ed3-1218-46b3-b7df-adeff9cac5f4" 00:16:34.008 } 00:16:34.008 ] 00:16:34.008 }, 00:16:34.008 { 00:16:34.008 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:34.008 "subtype": "NVMe", 00:16:34.008 "listen_addresses": [ 00:16:34.008 { 00:16:34.008 "trtype": "VFIOUSER", 00:16:34.008 "adrfam": "IPv4", 00:16:34.008 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:34.008 "trsvcid": "0" 00:16:34.008 } 00:16:34.008 ], 00:16:34.008 "allow_any_host": true, 00:16:34.008 "hosts": [], 00:16:34.008 "serial_number": "SPDK2", 00:16:34.008 "model_number": "SPDK bdev Controller", 00:16:34.008 "max_namespaces": 32, 00:16:34.008 "min_cntlid": 1, 00:16:34.008 "max_cntlid": 65519, 00:16:34.008 "namespaces": [ 00:16:34.008 { 00:16:34.008 "nsid": 1, 00:16:34.008 "bdev_name": "Malloc2", 00:16:34.008 "name": "Malloc2", 00:16:34.008 "nguid": "A88BC5EAF95E48CB83F04C61BE85F195", 00:16:34.008 "uuid": "a88bc5ea-f95e-48cb-83f0-4c61be85f195" 00:16:34.008 } 00:16:34.008 ] 00:16:34.008 } 00:16:34.008 ] 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1241422 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:34.009 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:34.009 [2024-11-20 16:11:09.927058] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:34.270 Malloc4 00:16:34.270 16:11:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:34.270 [2024-11-20 16:11:10.153625] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:34.270 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:34.270 Asynchronous Event Request test 00:16:34.270 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:34.270 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:34.270 Registering asynchronous event callbacks... 00:16:34.270 Starting namespace attribute notice tests for all controllers... 00:16:34.270 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:34.270 aer_cb - Changed Namespace 00:16:34.270 Cleaning up... 
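The namespace-change AER exercised here is driven entirely over the RPC socket; a minimal sketch of the sequence (paths as in this workspace, with the aer tool from the step above already waiting on the touch file):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # attaching a new namespace raises the Namespace Attribute Changed
    # async event that the aer_cb callback reports above
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    $SPDK/scripts/rpc.py nvmf_get_subsystems   # Malloc4 now listed as nsid 2
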
00:16:34.531 [ 00:16:34.531 { 00:16:34.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:34.531 "subtype": "Discovery", 00:16:34.531 "listen_addresses": [], 00:16:34.531 "allow_any_host": true, 00:16:34.531 "hosts": [] 00:16:34.531 }, 00:16:34.531 { 00:16:34.531 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:34.531 "subtype": "NVMe", 00:16:34.531 "listen_addresses": [ 00:16:34.531 { 00:16:34.531 "trtype": "VFIOUSER", 00:16:34.531 "adrfam": "IPv4", 00:16:34.531 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:34.531 "trsvcid": "0" 00:16:34.531 } 00:16:34.531 ], 00:16:34.531 "allow_any_host": true, 00:16:34.531 "hosts": [], 00:16:34.531 "serial_number": "SPDK1", 00:16:34.531 "model_number": "SPDK bdev Controller", 00:16:34.531 "max_namespaces": 32, 00:16:34.531 "min_cntlid": 1, 00:16:34.531 "max_cntlid": 65519, 00:16:34.531 "namespaces": [ 00:16:34.531 { 00:16:34.531 "nsid": 1, 00:16:34.531 "bdev_name": "Malloc1", 00:16:34.531 "name": "Malloc1", 00:16:34.531 "nguid": "C80636EF8DE24AF5BE2657C7CA3B6744", 00:16:34.531 "uuid": "c80636ef-8de2-4af5-be26-57c7ca3b6744" 00:16:34.531 }, 00:16:34.531 { 00:16:34.531 "nsid": 2, 00:16:34.531 "bdev_name": "Malloc3", 00:16:34.531 "name": "Malloc3", 00:16:34.531 "nguid": "BC2E4ED3121846B3B7DFADEFF9CAC5F4", 00:16:34.531 "uuid": "bc2e4ed3-1218-46b3-b7df-adeff9cac5f4" 00:16:34.531 } 00:16:34.531 ] 00:16:34.531 }, 00:16:34.531 { 00:16:34.531 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:34.531 "subtype": "NVMe", 00:16:34.531 "listen_addresses": [ 00:16:34.531 { 00:16:34.531 "trtype": "VFIOUSER", 00:16:34.531 "adrfam": "IPv4", 00:16:34.531 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:34.531 "trsvcid": "0" 00:16:34.531 } 00:16:34.531 ], 00:16:34.531 "allow_any_host": true, 00:16:34.531 "hosts": [], 00:16:34.531 "serial_number": "SPDK2", 00:16:34.531 "model_number": "SPDK bdev Controller", 00:16:34.531 "max_namespaces": 32, 00:16:34.531 "min_cntlid": 1, 00:16:34.531 "max_cntlid": 65519, 00:16:34.531 "namespaces": [ 00:16:34.531 { 00:16:34.531 "nsid": 1, 00:16:34.531 "bdev_name": "Malloc2", 00:16:34.531 "name": "Malloc2", 00:16:34.531 "nguid": "A88BC5EAF95E48CB83F04C61BE85F195", 00:16:34.531 "uuid": "a88bc5ea-f95e-48cb-83f0-4c61be85f195" 00:16:34.531 }, 00:16:34.531 { 00:16:34.531 "nsid": 2, 00:16:34.531 "bdev_name": "Malloc4", 00:16:34.531 "name": "Malloc4", 00:16:34.531 "nguid": "8A09631D52BE40F2A855C85BF3FCA640", 00:16:34.531 "uuid": "8a09631d-52be-40f2-a855-c85bf3fca640" 00:16:34.531 } 00:16:34.531 ] 00:16:34.531 } 00:16:34.531 ] 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1241422 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1232490 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1232490 ']' 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1232490 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.531 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232490 00:16:34.532 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.532 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.532 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232490' 00:16:34.532 killing process with pid 1232490 00:16:34.532 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1232490 00:16:34.532 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1232490 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1241585 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1241585' 00:16:34.793 Process pid: 1241585 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1241585 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1241585 ']' 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.793 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:34.793 [2024-11-20 16:11:10.628856] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:34.793 [2024-11-20 16:11:10.629798] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:16:34.793 [2024-11-20 16:11:10.629841] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.793 [2024-11-20 16:11:10.716906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.054 [2024-11-20 16:11:10.747161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.054 [2024-11-20 16:11:10.747192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.054 [2024-11-20 16:11:10.747198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.054 [2024-11-20 16:11:10.747203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.054 [2024-11-20 16:11:10.747207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.054 [2024-11-20 16:11:10.748347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.054 [2024-11-20 16:11:10.748501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.054 [2024-11-20 16:11:10.748646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.054 [2024-11-20 16:11:10.748648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.054 [2024-11-20 16:11:10.798834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:35.054 [2024-11-20 16:11:10.799707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:35.054 [2024-11-20 16:11:10.800649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:35.054 [2024-11-20 16:11:10.801294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:35.054 [2024-11-20 16:11:10.801309] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
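The interrupt-mode target is brought up the same way as the polled one, with --interrupt-mode on the app and -M -I on the transport; a minimal consolidated sketch of the bring-up that the trace below performs step by step (same workspace paths assumed):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
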
00:16:35.625 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.625 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:35.625 16:11:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:36.567 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:36.828 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:36.828 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:36.828 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:36.828 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:36.828 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:37.089 Malloc1 00:16:37.089 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:37.350 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:37.350 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:37.610 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:37.610 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:37.610 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:37.870 Malloc2 00:16:37.870 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:37.870 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:38.130 16:11:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1241585 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1241585 ']' 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1241585 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1241585 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1241585' 00:16:38.391 killing process with pid 1241585 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1241585 00:16:38.391 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1241585 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:38.652 00:16:38.652 real 0m50.901s 00:16:38.652 user 3m15.074s 00:16:38.652 sys 0m2.689s 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:38.652 ************************************ 00:16:38.652 END TEST nvmf_vfio_user 00:16:38.652 ************************************ 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.652 ************************************ 00:16:38.652 START TEST nvmf_vfio_user_nvme_compliance 00:16:38.652 ************************************ 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:38.652 * Looking for test storage... 
00:16:38.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.652 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.914 --rc genhtml_branch_coverage=1 00:16:38.914 --rc genhtml_function_coverage=1 00:16:38.914 --rc genhtml_legend=1 00:16:38.914 --rc geninfo_all_blocks=1 00:16:38.914 --rc geninfo_unexecuted_blocks=1 00:16:38.914 00:16:38.914 ' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.914 --rc genhtml_branch_coverage=1 00:16:38.914 --rc genhtml_function_coverage=1 00:16:38.914 --rc genhtml_legend=1 00:16:38.914 --rc geninfo_all_blocks=1 00:16:38.914 --rc geninfo_unexecuted_blocks=1 00:16:38.914 00:16:38.914 ' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.914 --rc genhtml_branch_coverage=1 00:16:38.914 --rc genhtml_function_coverage=1 00:16:38.914 --rc genhtml_legend=1 00:16:38.914 --rc geninfo_all_blocks=1 00:16:38.914 --rc geninfo_unexecuted_blocks=1 00:16:38.914 00:16:38.914 ' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.914 --rc genhtml_branch_coverage=1 00:16:38.914 --rc genhtml_function_coverage=1 00:16:38.914 --rc genhtml_legend=1 00:16:38.914 --rc geninfo_all_blocks=1 00:16:38.914 --rc 
geninfo_unexecuted_blocks=1 00:16:38.914 00:16:38.914 ' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.914 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1242364 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1242364' 00:16:38.915 Process pid: 1242364 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1242364 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1242364 ']' 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.915 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:38.915 [2024-11-20 16:11:14.728955] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:16:38.915 [2024-11-20 16:11:14.729031] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.915 [2024-11-20 16:11:14.817601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.177 [2024-11-20 16:11:14.852070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.177 [2024-11-20 16:11:14.852103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.177 [2024-11-20 16:11:14.852109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.177 [2024-11-20 16:11:14.852114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.177 [2024-11-20 16:11:14.852117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.177 [2024-11-20 16:11:14.853500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.177 [2024-11-20 16:11:14.853651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.177 [2024-11-20 16:11:14.853654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.748 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.748 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:39.748 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:40.707 malloc0 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:40.707 16:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.707 16:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:41.035 00:16:41.035 00:16:41.035 CUnit - A unit testing framework for C - Version 2.1-3 00:16:41.035 http://cunit.sourceforge.net/ 00:16:41.035 00:16:41.035 00:16:41.035 Suite: nvme_compliance 00:16:41.035 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 16:11:16.783572] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.035 [2024-11-20 16:11:16.784872] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:41.035 [2024-11-20 16:11:16.784884] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:41.035 [2024-11-20 16:11:16.784889] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:41.035 [2024-11-20 16:11:16.788599] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.035 passed 00:16:41.035 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 16:11:16.864097] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.035 [2024-11-20 16:11:16.867111] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.035 passed 00:16:41.035 Test: admin_identify_ns ...[2024-11-20 16:11:16.944790] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.328 [2024-11-20 16:11:17.004166] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:41.328 [2024-11-20 16:11:17.012168] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:41.328 [2024-11-20 16:11:17.033262] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:41.329 passed 00:16:41.329 Test: admin_get_features_mandatory_features ...[2024-11-20 16:11:17.115322] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.329 [2024-11-20 16:11:17.118340] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.329 passed 00:16:41.329 Test: admin_get_features_optional_features ...[2024-11-20 16:11:17.196804] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.329 [2024-11-20 16:11:17.199828] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.329 passed 00:16:41.591 Test: admin_set_features_number_of_queues ...[2024-11-20 16:11:17.276909] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.591 [2024-11-20 16:11:17.381246] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.591 passed 00:16:41.591 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 16:11:17.462303] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.591 [2024-11-20 16:11:17.465330] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.591 passed 00:16:41.851 Test: admin_get_log_page_with_lpo ...[2024-11-20 16:11:17.540766] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.851 [2024-11-20 16:11:17.612167] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:41.851 [2024-11-20 16:11:17.625213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.851 passed 00:16:41.851 Test: fabric_property_get ...[2024-11-20 16:11:17.699502] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.851 [2024-11-20 16:11:17.700702] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:41.851 [2024-11-20 16:11:17.702517] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.851 passed 00:16:41.851 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 16:11:17.780979] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.851 [2024-11-20 16:11:17.782183] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:41.851 [2024-11-20 16:11:17.784003] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.111 passed 00:16:42.111 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 16:11:17.863751] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.111 [2024-11-20 16:11:17.947164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:42.111 [2024-11-20 16:11:17.963165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:42.111 [2024-11-20 16:11:17.968241] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.111 passed 00:16:42.372 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 16:11:18.047485] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.372 [2024-11-20 16:11:18.048686] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:42.372 [2024-11-20 16:11:18.050503] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.372 passed 00:16:42.372 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 16:11:18.127258] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.372 [2024-11-20 16:11:18.204169] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:42.372 [2024-11-20 16:11:18.228165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:42.372 [2024-11-20 16:11:18.233239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.372 passed 00:16:42.634 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 16:11:18.307431] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.634 [2024-11-20 16:11:18.308623] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:42.634 [2024-11-20 16:11:18.308641] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:42.634 [2024-11-20 16:11:18.310451] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.634 passed 00:16:42.634 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 16:11:18.387169] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.634 [2024-11-20 16:11:18.481165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:42.634 [2024-11-20 16:11:18.489169] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:42.634 [2024-11-20 16:11:18.497167] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:42.634 [2024-11-20 16:11:18.505165] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:42.634 [2024-11-20 16:11:18.534293] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.634 passed 00:16:42.894 Test: admin_create_io_sq_verify_pc ...[2024-11-20 16:11:18.609491] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.894 [2024-11-20 16:11:18.628173] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:42.894 [2024-11-20 16:11:18.645666] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.894 passed 00:16:42.894 Test: admin_create_io_qp_max_qps ...[2024-11-20 16:11:18.724140] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.278 [2024-11-20 16:11:19.812167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:44.278 [2024-11-20 16:11:20.201655] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.539 passed 00:16:44.539 Test: admin_create_io_sq_shared_cq ...[2024-11-20 16:11:20.276112] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.539 [2024-11-20 16:11:20.409170] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:44.539 [2024-11-20 16:11:20.446217] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.539 passed 00:16:44.539 00:16:44.539 Run Summary: Type Total Ran Passed Failed Inactive 00:16:44.539 suites 1 1 n/a 0 0 00:16:44.539 tests 18 18 18 0 0 00:16:44.539 asserts 
360 360 360 0 n/a 00:16:44.539 00:16:44.539 Elapsed time = 1.507 seconds 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1242364 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1242364 ']' 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1242364 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242364 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242364' 00:16:44.801 killing process with pid 1242364 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1242364 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1242364 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:44.801 00:16:44.801 real 0m6.242s 00:16:44.801 user 0m17.684s 00:16:44.801 sys 0m0.549s 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:44.801 ************************************ 00:16:44.801 END TEST nvmf_vfio_user_nvme_compliance 00:16:44.801 ************************************ 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.801 16:11:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:45.063 ************************************ 00:16:45.063 START TEST nvmf_vfio_user_fuzz 00:16:45.063 ************************************ 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:45.063 * Looking for test storage... 
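For reference, the 18 compliance tests summarized above are all driven by one invocation of the prebuilt nvme_compliance binary against the vfio-user transport ID, exactly as traced at compliance.sh@40. Rerunning the suite by hand from the same build tree (assuming a target is already listening at /var/run/vfio-user) is a single command:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -r takes an SPDK transport ID string naming the listener set up earlier
    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'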
00:16:45.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.063 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.063 --rc genhtml_branch_coverage=1 00:16:45.063 --rc genhtml_function_coverage=1 00:16:45.063 --rc genhtml_legend=1 00:16:45.064 --rc geninfo_all_blocks=1 00:16:45.064 --rc geninfo_unexecuted_blocks=1 00:16:45.064 00:16:45.064 ' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.064 --rc genhtml_branch_coverage=1 00:16:45.064 --rc genhtml_function_coverage=1 00:16:45.064 --rc genhtml_legend=1 00:16:45.064 --rc geninfo_all_blocks=1 00:16:45.064 --rc geninfo_unexecuted_blocks=1 00:16:45.064 00:16:45.064 ' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.064 --rc genhtml_branch_coverage=1 00:16:45.064 --rc genhtml_function_coverage=1 00:16:45.064 --rc genhtml_legend=1 00:16:45.064 --rc geninfo_all_blocks=1 00:16:45.064 --rc geninfo_unexecuted_blocks=1 00:16:45.064 00:16:45.064 ' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.064 --rc genhtml_branch_coverage=1 00:16:45.064 --rc genhtml_function_coverage=1 00:16:45.064 --rc genhtml_legend=1 00:16:45.064 --rc geninfo_all_blocks=1 00:16:45.064 --rc geninfo_unexecuted_blocks=1 00:16:45.064 00:16:45.064 ' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:45.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1243750 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1243750' 00:16:45.064 Process pid: 1243750 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1243750 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1243750 ']' 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
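The launch pattern here is the one used throughout the job: start nvmf_tgt in the background, capture its pid, install a cleanup trap, and block until the RPC socket answers. A minimal sketch of that sequence, using the killprocess and waitforlisten helpers from common/autotest_common.sh in this tree:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                                  # pid 1243750 in this run
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $nvmfpid                      # polls /var/tmp/spdk.sock until the target responds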
00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.064 16:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:46.008 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.008 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:46.008 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:46.949 malloc0 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.949 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
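With the target up, the fuzz fixture is assembled from the handful of RPCs traced above (vfio_user_fuzz.sh@32-@39). They map one-to-one onto scripts/rpc.py calls, so the same target can be built by hand against the default /var/tmp/spdk.sock:

    mkdir -p /var/run/vfio-user                                  # listener directory must exist
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0          # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The 30-second fuzz pass itself then runs nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a against the resulting transport ID, as traced below.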
00:16:47.209 16:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:19.335 Fuzzing completed. Shutting down the fuzz application 00:17:19.335 00:17:19.335 Dumping successful admin opcodes: 00:17:19.335 8, 9, 10, 24, 00:17:19.335 Dumping successful io opcodes: 00:17:19.335 0, 00:17:19.335 NS: 0x20000081ef00 I/O qp, Total commands completed: 1335883, total successful commands: 5236, random_seed: 1292596992 00:17:19.335 NS: 0x20000081ef00 admin qp, Total commands completed: 295234, total successful commands: 2383, random_seed: 1099256832 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1243750 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1243750 ']' 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1243750 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.335 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1243750 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1243750' 00:17:19.336 killing process with pid 1243750 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1243750 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1243750 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:19.336 00:17:19.336 real 0m32.793s 00:17:19.336 user 0m34.904s 00:17:19.336 sys 0m26.603s 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:19.336 
************************************ 00:17:19.336 END TEST nvmf_vfio_user_fuzz 00:17:19.336 ************************************ 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.336 ************************************ 00:17:19.336 START TEST nvmf_auth_target 00:17:19.336 ************************************ 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:19.336 * Looking for test storage... 00:17:19.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.336 --rc genhtml_branch_coverage=1 00:17:19.336 --rc genhtml_function_coverage=1 00:17:19.336 --rc genhtml_legend=1 00:17:19.336 --rc geninfo_all_blocks=1 00:17:19.336 --rc geninfo_unexecuted_blocks=1 00:17:19.336 00:17:19.336 ' 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.336 --rc genhtml_branch_coverage=1 00:17:19.336 --rc genhtml_function_coverage=1 00:17:19.336 --rc genhtml_legend=1 00:17:19.336 --rc geninfo_all_blocks=1 00:17:19.336 --rc geninfo_unexecuted_blocks=1 00:17:19.336 00:17:19.336 ' 00:17:19.336 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.336 --rc genhtml_branch_coverage=1 00:17:19.336 --rc genhtml_function_coverage=1 00:17:19.336 --rc genhtml_legend=1 00:17:19.336 --rc geninfo_all_blocks=1 00:17:19.336 --rc geninfo_unexecuted_blocks=1 00:17:19.336 00:17:19.336 ' 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.337 --rc genhtml_branch_coverage=1 00:17:19.337 --rc genhtml_function_coverage=1 00:17:19.337 --rc genhtml_legend=1 00:17:19.337 --rc geninfo_all_blocks=1 00:17:19.337 --rc geninfo_unexecuted_blocks=1 00:17:19.337 00:17:19.337 ' 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.337 16:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:19.337 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.338 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:25.928 
16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.928 16:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:25.928 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.928 16:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:25.928 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.928 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:25.929 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:25.929 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:25.929 16:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:25.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:17:25.929 00:17:25.929 --- 10.0.0.2 ping statistics --- 00:17:25.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.929 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:17:25.929 00:17:25.929 --- 10.0.0.1 ping statistics --- 00:17:25.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.929 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1253809 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1253809 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1253809 ']' 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
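With the data path verified, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the target's RPC socket answers. A rough equivalent of that start-and-wait step, assuming an SPDK checkout as the working directory (the retry budget and the rpc_get_methods probe are illustrative assumptions; waitforlisten's real implementation lives in autotest_common.sh):

  # Start the target in the namespace with the same flags as in the log.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # Poll the RPC socket until the app is ready to serve requests.
  for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
  done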
00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.929 16:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1254110 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=36e8149f7af279a963fad8e8ced9b92f4cc5965fb6048e6f 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gbN 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 36e8149f7af279a963fad8e8ced9b92f4cc5965fb6048e6f 0 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 36e8149f7af279a963fad8e8ced9b92f4cc5965fb6048e6f 0 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=36e8149f7af279a963fad8e8ced9b92f4cc5965fb6048e6f 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gbN 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gbN 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.gbN 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1522d185ee41dd9e06074bb51a137c48cc19592ae8b3e3317552a6a1f3425e77 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.alZ 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1522d185ee41dd9e06074bb51a137c48cc19592ae8b3e3317552a6a1f3425e77 3 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1522d185ee41dd9e06074bb51a137c48cc19592ae8b3e3317552a6a1f3425e77 3 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1522d185ee41dd9e06074bb51a137c48cc19592ae8b3e3317552a6a1f3425e77 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:26.501 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.alZ 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.alZ 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.alZ 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
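gen_dhchap_key draws random bytes from /dev/urandom, keeps them as an ASCII hex string, and has the inline python step wrap that string in the DHHC-1 secret format: prefix, hash identifier (0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map above), then base64 of the secret with a CRC-32 appended. A sketch of that encoding, hedged on the little-endian CRC suffix (an assumption inferred from nvme-cli's key format and the secrets visible later in this log):

  key=36e8149f7af279a963fad8e8ced9b92f4cc5965fb6048e6f   # the 48-char hex string generated above
  python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" 0
  # Should print the same DHHC-1:00:MzZlODE0...ZTZm5JJJeQ==: secret that
  # nvme_connect presents for key0 further down.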
00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d01bd71645ccad07d540c061d2ea8236 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uH0 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d01bd71645ccad07d540c061d2ea8236 1 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d01bd71645ccad07d540c061d2ea8236 1 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d01bd71645ccad07d540c061d2ea8236 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uH0 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uH0 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.uH0 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6ef777f94f929f503c21fb0d367fefed2ec0b974d16a4688 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yOQ 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6ef777f94f929f503c21fb0d367fefed2ec0b974d16a4688 2 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6ef777f94f929f503c21fb0d367fefed2ec0b974d16a4688 2 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:26.763 16:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6ef777f94f929f503c21fb0d367fefed2ec0b974d16a4688 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yOQ 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yOQ 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.yOQ 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aa24bedd884620a1abb501e954404174aa534a93bd70c62f 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Hjy 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aa24bedd884620a1abb501e954404174aa534a93bd70c62f 2 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aa24bedd884620a1abb501e954404174aa534a93bd70c62f 2 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:26.763 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aa24bedd884620a1abb501e954404174aa534a93bd70c62f 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Hjy 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Hjy 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Hjy 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
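Slot by slot, auth.sh pairs each host key (keys[i]) with an optional controller key (ckeys[i]) for bidirectional authentication, deliberately varying digest and key length so every combination gets exercised. Condensed, the assignments running through this stretch of the log amount to the following (gen_dhchap_key echoes the key file path, so each slot holds a /tmp/spdk.key-* file; this condensation assumes nvmf/common.sh is sourced):

  keys[0]=$(gen_dhchap_key null 48);   ckeys[0]=$(gen_dhchap_key sha512 64)
  keys[1]=$(gen_dhchap_key sha256 32); ckeys[1]=$(gen_dhchap_key sha384 48)
  keys[2]=$(gen_dhchap_key sha384 48); ckeys[2]=$(gen_dhchap_key sha256 32)
  keys[3]=$(gen_dhchap_key sha512 64); ckeys[3]=   # slot 3 gets no controller key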
00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=860c1235272a65aeadd3c8ca4e2a52ef 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GHm 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 860c1235272a65aeadd3c8ca4e2a52ef 1 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 860c1235272a65aeadd3c8ca4e2a52ef 1 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=860c1235272a65aeadd3c8ca4e2a52ef 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:26.764 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GHm 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GHm 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.GHm 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a337c6623e34ca5ac832fe30214f9a1e829ba6f8e3076c174c3dbde3f0723742 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hIZ 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key a337c6623e34ca5ac832fe30214f9a1e829ba6f8e3076c174c3dbde3f0723742 3 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a337c6623e34ca5ac832fe30214f9a1e829ba6f8e3076c174c3dbde3f0723742 3 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a337c6623e34ca5ac832fe30214f9a1e829ba6f8e3076c174c3dbde3f0723742 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hIZ 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hIZ 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.hIZ 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1253809 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1253809 ']' 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.025 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1254110 /var/tmp/host.sock 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1254110 ']' 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:27.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
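With all four slots generated, the test registers every key file twice: with the target over the default /var/tmp/spdk.sock (rpc_cmd) and with the host-side stack over /var/tmp/host.sock (hostrpc), under the names key0..key3 and ckey0..ckey2 that the DH-HMAC-CHAP options reference later. A condensed sketch of the registration loop that follows:

  for i in "${!keys[@]}"; do
    ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"                        # target side
    ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"  # host side
    if [[ -n ${ckeys[i]} ]]; then
      ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
      ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
  done

From here the test iterates over digests, DH groups, and key slots: bdev_nvme_set_options pins the host to one combination, nvmf_subsystem_add_host grants the host NQN access with the matching key pair, bdev_nvme_attach_controller authenticates a connection, the qpair dump is checked for state/digest/dhgroup, and the same secrets are replayed through the kernel initiator's nvme connect before everything is torn down for the next combination.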
00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.286 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.547 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.547 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:27.547 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gbN 00:17:27.547 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gbN 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gbN 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.alZ ]] 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.alZ 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.alZ 00:17:27.548 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.alZ 00:17:27.808 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:27.808 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uH0 00:17:27.808 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.808 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.808 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.808 16:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uH0 00:17:27.808 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uH0 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.yOQ ]] 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yOQ 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yOQ 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yOQ 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Hjy 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Hjy 00:17:28.069 16:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Hjy 00:17:28.331 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.GHm ]] 00:17:28.331 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GHm 00:17:28.331 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.331 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.331 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.331 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GHm 00:17:28.331 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GHm 00:17:28.590 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:28.590 16:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hIZ 00:17:28.590 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.590 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.590 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.590 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.hIZ 00:17:28.590 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.hIZ 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.851 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.851 
16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.112 00:17:29.112 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.112 16:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.112 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.373 { 00:17:29.373 "cntlid": 1, 00:17:29.373 "qid": 0, 00:17:29.373 "state": "enabled", 00:17:29.373 "thread": "nvmf_tgt_poll_group_000", 00:17:29.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.373 "listen_address": { 00:17:29.373 "trtype": "TCP", 00:17:29.373 "adrfam": "IPv4", 00:17:29.373 "traddr": "10.0.0.2", 00:17:29.373 "trsvcid": "4420" 00:17:29.373 }, 00:17:29.373 "peer_address": { 00:17:29.373 "trtype": "TCP", 00:17:29.373 "adrfam": "IPv4", 00:17:29.373 "traddr": "10.0.0.1", 00:17:29.373 "trsvcid": "53584" 00:17:29.373 }, 00:17:29.373 "auth": { 00:17:29.373 "state": "completed", 00:17:29.373 "digest": "sha256", 00:17:29.373 "dhgroup": "null" 00:17:29.373 } 00:17:29.373 } 00:17:29.373 ]' 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.373 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.635 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:29.635 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.635 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.635 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.635 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.635 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:29.635 16:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.576 16:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.576 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.837 00:17:30.837 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.837 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.837 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.097 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.097 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.097 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.097 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.097 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.097 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.097 { 00:17:31.097 "cntlid": 3, 00:17:31.098 "qid": 0, 00:17:31.098 "state": "enabled", 00:17:31.098 "thread": "nvmf_tgt_poll_group_000", 00:17:31.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.098 "listen_address": { 00:17:31.098 "trtype": "TCP", 00:17:31.098 "adrfam": "IPv4", 00:17:31.098 "traddr": "10.0.0.2", 00:17:31.098 "trsvcid": "4420" 00:17:31.098 }, 00:17:31.098 "peer_address": { 00:17:31.098 "trtype": "TCP", 00:17:31.098 "adrfam": "IPv4", 00:17:31.098 "traddr": "10.0.0.1", 00:17:31.098 "trsvcid": "53624" 00:17:31.098 }, 00:17:31.098 "auth": { 00:17:31.098 "state": "completed", 00:17:31.098 "digest": "sha256", 00:17:31.098 "dhgroup": "null" 00:17:31.098 } 00:17:31.098 } 00:17:31.098 ]' 00:17:31.098 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.098 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.098 16:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.098 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.098 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.359 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.359 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.359 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.359 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:31.359 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:31.951 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.212 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.212 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.212 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.212 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.212 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.212 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.212 16:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.212 16:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.212 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.472 00:17:32.473 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.473 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.473 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.733 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.734 { 00:17:32.734 "cntlid": 5, 00:17:32.734 "qid": 0, 00:17:32.734 "state": "enabled", 00:17:32.734 "thread": "nvmf_tgt_poll_group_000", 00:17:32.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.734 "listen_address": { 00:17:32.734 "trtype": "TCP", 00:17:32.734 "adrfam": "IPv4", 00:17:32.734 "traddr": "10.0.0.2", 00:17:32.734 "trsvcid": "4420" 00:17:32.734 }, 00:17:32.734 "peer_address": { 00:17:32.734 "trtype": "TCP", 00:17:32.734 "adrfam": "IPv4", 00:17:32.734 "traddr": "10.0.0.1", 00:17:32.734 "trsvcid": "53644" 00:17:32.734 }, 00:17:32.734 "auth": { 00:17:32.734 "state": "completed", 00:17:32.734 "digest": "sha256", 00:17:32.734 "dhgroup": "null" 00:17:32.734 } 00:17:32.734 } 00:17:32.734 ]' 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.734 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.994 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.994 16:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.994 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.994 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:32.995 16:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.936 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.937 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.198 00:17:34.198 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.198 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.198 16:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.198 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.198 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.198 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.198 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.459 { 00:17:34.459 "cntlid": 7, 00:17:34.459 "qid": 0, 00:17:34.459 "state": "enabled", 00:17:34.459 "thread": "nvmf_tgt_poll_group_000", 00:17:34.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.459 "listen_address": { 00:17:34.459 "trtype": "TCP", 00:17:34.459 "adrfam": "IPv4", 00:17:34.459 "traddr": "10.0.0.2", 00:17:34.459 "trsvcid": "4420" 00:17:34.459 }, 00:17:34.459 "peer_address": { 00:17:34.459 "trtype": "TCP", 00:17:34.459 "adrfam": "IPv4", 00:17:34.459 "traddr": "10.0.0.1", 00:17:34.459 "trsvcid": "53672" 00:17:34.459 }, 00:17:34.459 "auth": { 00:17:34.459 "state": "completed", 00:17:34.459 "digest": "sha256", 00:17:34.459 "dhgroup": "null" 00:17:34.459 } 00:17:34.459 } 00:17:34.459 ]' 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.459 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.721 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:34.721 16:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.292 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.553 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.814 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.814 { 00:17:35.814 "cntlid": 9, 00:17:35.814 "qid": 0, 00:17:35.814 "state": "enabled", 00:17:35.814 "thread": "nvmf_tgt_poll_group_000", 00:17:35.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.814 "listen_address": { 00:17:35.814 "trtype": "TCP", 00:17:35.814 "adrfam": "IPv4", 00:17:35.814 "traddr": "10.0.0.2", 00:17:35.814 "trsvcid": "4420" 00:17:35.814 }, 00:17:35.814 "peer_address": { 00:17:35.814 "trtype": "TCP", 00:17:35.814 "adrfam": "IPv4", 00:17:35.814 "traddr": "10.0.0.1", 00:17:35.814 "trsvcid": "53704" 00:17:35.814 }, 00:17:35.814 "auth": { 00:17:35.814 "state": "completed", 00:17:35.814 "digest": "sha256", 00:17:35.814 "dhgroup": "ffdhe2048" 00:17:35.814 } 00:17:35.814 } 00:17:35.814 ]' 00:17:35.814 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.075 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.075 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.075 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:36.075 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.075 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.075 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.075 16:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.336 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:36.336 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:36.907 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.168 16:12:12 
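
The cycle this trace keeps repeating is compact enough to summarize. Below is a minimal sketch of one connect_authenticate iteration, condensed from the RPC invocations above; the host socket, addresses, and NQNs are the ones from this run, while routing the target-side call through the default RPC socket (no -s) is an assumption, since the trace only spells out the host socket.

#!/usr/bin/env bash
# Sketch of one iteration, condensed from the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0
dhgroup=${dhgroup:-ffdhe2048}   # loop variable in target/auth.sh
keyid=${keyid:-0}               # loop variable in target/auth.sh

# Host side: pin the initiator to one digest/dhgroup combination (@121).
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"

# Target side: register the host with the key pair under test (@70).
# Assumed to go through the default target socket; note that for key3 the
# script omits --dhchap-ctrlr-key (ckeys[3] is unset, see the @68 expansion).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: attaching a controller forces the DH-HMAC-CHAP exchange (@60).
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
  --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
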
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.168 16:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.431 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.431 { 00:17:37.431 "cntlid": 11, 00:17:37.431 "qid": 0, 00:17:37.431 "state": "enabled", 00:17:37.431 "thread": "nvmf_tgt_poll_group_000", 00:17:37.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.431 "listen_address": { 00:17:37.431 "trtype": "TCP", 00:17:37.431 "adrfam": "IPv4", 00:17:37.431 "traddr": "10.0.0.2", 00:17:37.431 "trsvcid": "4420" 00:17:37.431 }, 00:17:37.431 "peer_address": { 00:17:37.431 "trtype": "TCP", 00:17:37.431 "adrfam": "IPv4", 00:17:37.431 "traddr": "10.0.0.1", 00:17:37.431 "trsvcid": "53732" 00:17:37.431 }, 00:17:37.431 "auth": { 00:17:37.431 "state": "completed", 00:17:37.431 "digest": "sha256", 00:17:37.431 "dhgroup": "ffdhe2048" 00:17:37.431 } 00:17:37.431 } 00:17:37.431 ]' 00:17:37.431 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.692 16:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.692 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.692 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.692 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.692 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.692 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.692 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.954 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:37.954 16:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:38.527 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.788 16:12:14 
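
After each attach, the script verifies the negotiated authentication parameters straight from the live qpair. The sketch below restates the jq filters and assertions visible in the trace (@73-@78), with the variables from the previous sketch:

# Confirm the controller came up, then assert what the qpair negotiated.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256     ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# Tear down before the next key/dhgroup combination (@78).
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
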
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.788 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.049 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.049 { 00:17:39.049 "cntlid": 13, 00:17:39.049 "qid": 0, 00:17:39.049 "state": "enabled", 00:17:39.049 "thread": "nvmf_tgt_poll_group_000", 00:17:39.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.049 "listen_address": { 00:17:39.049 "trtype": "TCP", 00:17:39.049 "adrfam": "IPv4", 00:17:39.049 "traddr": "10.0.0.2", 00:17:39.049 "trsvcid": "4420" 00:17:39.049 }, 00:17:39.049 "peer_address": { 00:17:39.049 "trtype": "TCP", 00:17:39.049 "adrfam": "IPv4", 00:17:39.049 "traddr": "10.0.0.1", 00:17:39.049 "trsvcid": "46488" 00:17:39.049 }, 00:17:39.049 "auth": { 00:17:39.049 "state": "completed", 00:17:39.049 "digest": 
"sha256", 00:17:39.049 "dhgroup": "ffdhe2048" 00:17:39.049 } 00:17:39.049 } 00:17:39.049 ]' 00:17:39.049 16:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.310 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.310 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.310 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.310 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.310 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.310 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.310 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.570 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:39.570 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.149 16:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.516 16:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.516 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.516 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.789 { 00:17:40.789 "cntlid": 15, 00:17:40.789 "qid": 0, 00:17:40.789 "state": "enabled", 00:17:40.789 "thread": "nvmf_tgt_poll_group_000", 00:17:40.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.789 "listen_address": { 00:17:40.789 "trtype": "TCP", 00:17:40.789 "adrfam": "IPv4", 00:17:40.789 "traddr": "10.0.0.2", 00:17:40.789 "trsvcid": "4420" 00:17:40.789 }, 00:17:40.789 "peer_address": { 00:17:40.789 "trtype": "TCP", 00:17:40.789 "adrfam": "IPv4", 00:17:40.789 "traddr": "10.0.0.1", 00:17:40.789 
"trsvcid": "46520" 00:17:40.789 }, 00:17:40.789 "auth": { 00:17:40.789 "state": "completed", 00:17:40.789 "digest": "sha256", 00:17:40.789 "dhgroup": "ffdhe2048" 00:17:40.789 } 00:17:40.789 } 00:17:40.789 ]' 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.789 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.050 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:41.050 16:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.622 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:41.883 16:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.883 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.144 00:17:42.144 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.144 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.144 16:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.406 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.406 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.406 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.406 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.406 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.406 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.406 { 00:17:42.406 "cntlid": 17, 00:17:42.407 "qid": 0, 00:17:42.407 "state": "enabled", 00:17:42.407 "thread": "nvmf_tgt_poll_group_000", 00:17:42.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.407 "listen_address": { 00:17:42.407 "trtype": "TCP", 00:17:42.407 "adrfam": "IPv4", 
00:17:42.407 "traddr": "10.0.0.2", 00:17:42.407 "trsvcid": "4420" 00:17:42.407 }, 00:17:42.407 "peer_address": { 00:17:42.407 "trtype": "TCP", 00:17:42.407 "adrfam": "IPv4", 00:17:42.407 "traddr": "10.0.0.1", 00:17:42.407 "trsvcid": "46554" 00:17:42.407 }, 00:17:42.407 "auth": { 00:17:42.407 "state": "completed", 00:17:42.407 "digest": "sha256", 00:17:42.407 "dhgroup": "ffdhe3072" 00:17:42.407 } 00:17:42.407 } 00:17:42.407 ]' 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.407 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.668 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:42.668 16:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:43.240 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.500 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.760 00:17:43.760 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.760 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.760 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.021 { 
00:17:44.021 "cntlid": 19, 00:17:44.021 "qid": 0, 00:17:44.021 "state": "enabled", 00:17:44.021 "thread": "nvmf_tgt_poll_group_000", 00:17:44.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.021 "listen_address": { 00:17:44.021 "trtype": "TCP", 00:17:44.021 "adrfam": "IPv4", 00:17:44.021 "traddr": "10.0.0.2", 00:17:44.021 "trsvcid": "4420" 00:17:44.021 }, 00:17:44.021 "peer_address": { 00:17:44.021 "trtype": "TCP", 00:17:44.021 "adrfam": "IPv4", 00:17:44.021 "traddr": "10.0.0.1", 00:17:44.021 "trsvcid": "46594" 00:17:44.021 }, 00:17:44.021 "auth": { 00:17:44.021 "state": "completed", 00:17:44.021 "digest": "sha256", 00:17:44.021 "dhgroup": "ffdhe3072" 00:17:44.021 } 00:17:44.021 } 00:17:44.021 ]' 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.021 16:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.282 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:44.282 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:44.854 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.115 16:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.377 00:17:45.377 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.377 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.377 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.638 16:12:21 
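
Stepping back, the loop markers (@119, @120, @121, @123) show the shape of the sweep producing this whole section: an outer walk over DH groups and an inner walk over key IDs, with sha256 fixed as the digest in this excerpt. The reconstruction below is an assumption-laden sketch: connect_authenticate is the script's own helper (its body is what the traces above expand, @65-@78), and the keys array is a stand-in for the secrets provisioned earlier in the test.

# Reconstructed sweep; only the groups actually seen in this excerpt.
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)
keys=(sec0 sec1 sec2 sec3)                    # stand-ins for DHHC-1 secrets

hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # as expanded at @31

for dhgroup in "${dhgroups[@]}"; do           # @119
  for keyid in "${!keys[@]}"; do              # @120: key IDs 0..3
    hostrpc bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @121
    connect_authenticate sha256 "$dhgroup" "$keyid"          # @123
  done
done
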
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.638 { 00:17:45.638 "cntlid": 21, 00:17:45.638 "qid": 0, 00:17:45.638 "state": "enabled", 00:17:45.638 "thread": "nvmf_tgt_poll_group_000", 00:17:45.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.638 "listen_address": { 00:17:45.638 "trtype": "TCP", 00:17:45.638 "adrfam": "IPv4", 00:17:45.638 "traddr": "10.0.0.2", 00:17:45.638 "trsvcid": "4420" 00:17:45.638 }, 00:17:45.638 "peer_address": { 00:17:45.638 "trtype": "TCP", 00:17:45.638 "adrfam": "IPv4", 00:17:45.638 "traddr": "10.0.0.1", 00:17:45.638 "trsvcid": "46628" 00:17:45.638 }, 00:17:45.638 "auth": { 00:17:45.638 "state": "completed", 00:17:45.638 "digest": "sha256", 00:17:45.638 "dhgroup": "ffdhe3072" 00:17:45.638 } 00:17:45.638 } 00:17:45.638 ]' 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.638 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.901 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:45.901 16:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:46.473 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.734 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.996 00:17:46.996 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.996 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.996 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.257 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.257 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.257 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.257 16:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.257 16:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.257 { 00:17:47.257 "cntlid": 23, 00:17:47.257 "qid": 0, 00:17:47.257 "state": "enabled", 00:17:47.257 "thread": "nvmf_tgt_poll_group_000", 00:17:47.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.257 "listen_address": { 00:17:47.257 "trtype": "TCP", 00:17:47.257 "adrfam": "IPv4", 00:17:47.257 "traddr": "10.0.0.2", 00:17:47.257 "trsvcid": "4420" 00:17:47.257 }, 00:17:47.257 "peer_address": { 00:17:47.257 "trtype": "TCP", 00:17:47.257 "adrfam": "IPv4", 00:17:47.257 "traddr": "10.0.0.1", 00:17:47.257 "trsvcid": "46662" 00:17:47.257 }, 00:17:47.257 "auth": { 00:17:47.257 "state": "completed", 00:17:47.257 "digest": "sha256", 00:17:47.257 "dhgroup": "ffdhe3072" 00:17:47.257 } 00:17:47.257 } 00:17:47.257 ]' 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.257 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.518 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:47.518 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.162 16:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.422 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.682 00:17:48.683 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.683 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.683 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.683 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.943 { 00:17:48.943 "cntlid": 25, 00:17:48.943 "qid": 0, 00:17:48.943 "state": "enabled", 00:17:48.943 "thread": "nvmf_tgt_poll_group_000", 00:17:48.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.943 "listen_address": { 00:17:48.943 "trtype": "TCP", 00:17:48.943 "adrfam": "IPv4", 00:17:48.943 "traddr": "10.0.0.2", 00:17:48.943 "trsvcid": "4420" 00:17:48.943 }, 00:17:48.943 "peer_address": { 00:17:48.943 "trtype": "TCP", 00:17:48.943 "adrfam": "IPv4", 00:17:48.943 "traddr": "10.0.0.1", 00:17:48.943 "trsvcid": "46682" 00:17:48.943 }, 00:17:48.943 "auth": { 00:17:48.943 "state": "completed", 00:17:48.943 "digest": "sha256", 00:17:48.943 "dhgroup": "ffdhe4096" 00:17:48.943 } 00:17:48.943 } 00:17:48.943 ]' 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.943 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.944 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.204 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:49.204 16:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:49.776 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.037 16:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.298 00:17:50.298 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.298 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.298 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.558 { 00:17:50.558 "cntlid": 27, 00:17:50.558 "qid": 0, 00:17:50.558 "state": "enabled", 00:17:50.558 "thread": "nvmf_tgt_poll_group_000", 00:17:50.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.558 "listen_address": { 00:17:50.558 "trtype": "TCP", 00:17:50.558 "adrfam": "IPv4", 00:17:50.558 "traddr": "10.0.0.2", 00:17:50.558 "trsvcid": "4420" 00:17:50.558 }, 00:17:50.558 "peer_address": { 00:17:50.558 "trtype": "TCP", 00:17:50.558 "adrfam": "IPv4", 00:17:50.558 "traddr": "10.0.0.1", 00:17:50.558 "trsvcid": "45370" 00:17:50.558 }, 00:17:50.558 "auth": { 00:17:50.558 "state": "completed", 00:17:50.558 "digest": "sha256", 00:17:50.558 "dhgroup": "ffdhe4096" 00:17:50.558 } 00:17:50.558 } 00:17:50.558 ]' 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.558 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.817 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:50.817 16:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:51.387 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:51.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.387 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.387 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.387 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.388 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.388 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:51.388 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.648 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.908 00:17:51.908 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
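The passes above and below all come from the same nested driver loop in target/auth.sh: the outer loop walks the DH groups under test, the inner loop walks key IDs 0-3, and each pass reconfigures the host, registers the host NQN with that key, attaches and verifies a controller, then tears everything down. A minimal sketch of the loop, reconstructed from the auth.sh@119-123 markers in this trace (the digest is fixed at sha256 for this run; the array names are taken from the loop variables shown in the trace, not verified against the script source):

    for dhgroup in "${dhgroups[@]}"; do          # target/auth.sh@119
        for keyid in "${!keys[@]}"; do           # target/auth.sh@120
            # limit the host to exactly the digest/dhgroup pair under test
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @121
            connect_authenticate sha256 "$dhgroup" "$keyid"            # @123
        done
    done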
00:17:51.908 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.908 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.170 { 00:17:52.170 "cntlid": 29, 00:17:52.170 "qid": 0, 00:17:52.170 "state": "enabled", 00:17:52.170 "thread": "nvmf_tgt_poll_group_000", 00:17:52.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.170 "listen_address": { 00:17:52.170 "trtype": "TCP", 00:17:52.170 "adrfam": "IPv4", 00:17:52.170 "traddr": "10.0.0.2", 00:17:52.170 "trsvcid": "4420" 00:17:52.170 }, 00:17:52.170 "peer_address": { 00:17:52.170 "trtype": "TCP", 00:17:52.170 "adrfam": "IPv4", 00:17:52.170 "traddr": "10.0.0.1", 00:17:52.170 "trsvcid": "45396" 00:17:52.170 }, 00:17:52.170 "auth": { 00:17:52.170 "state": "completed", 00:17:52.170 "digest": "sha256", 00:17:52.170 "dhgroup": "ffdhe4096" 00:17:52.170 } 00:17:52.170 } 00:17:52.170 ]' 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.170 16:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.170 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.170 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.170 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.430 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:52.430 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: 
--dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.000 16:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.260 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:53.260 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.261 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.521 00:17:53.521 16:12:29 
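The DHHC-1 strings split across the lines above are DH-HMAC-CHAP secrets in the representation defined by the NVMe specification: as I read that format, the two-digit field after "DHHC-1" records the hash used to transform the raw secret (00 for an unhashed secret, 01/02/03 for SHA-256/SHA-384/SHA-512), and the middle field is the base64 of the secret with a CRC-32 check value appended. That is why key0 secrets in this log always begin "DHHC-1:00:" while key3 secrets begin "DHHC-1:03:". Recent nvme-cli builds can generate keys in this format; roughly as below (the flag spelling varies between nvme-cli versions, so treat this as a sketch rather than the exact command this test used):

    # mint a SHA-256-transformed host key bound to this host NQN
    nvme gen-dhchap-key --hmac=1 \
        -n nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be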
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.521 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.521 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.781 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.781 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.781 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.781 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.781 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.781 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.781 { 00:17:53.781 "cntlid": 31, 00:17:53.781 "qid": 0, 00:17:53.781 "state": "enabled", 00:17:53.781 "thread": "nvmf_tgt_poll_group_000", 00:17:53.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.781 "listen_address": { 00:17:53.782 "trtype": "TCP", 00:17:53.782 "adrfam": "IPv4", 00:17:53.782 "traddr": "10.0.0.2", 00:17:53.782 "trsvcid": "4420" 00:17:53.782 }, 00:17:53.782 "peer_address": { 00:17:53.782 "trtype": "TCP", 00:17:53.782 "adrfam": "IPv4", 00:17:53.782 "traddr": "10.0.0.1", 00:17:53.782 "trsvcid": "45426" 00:17:53.782 }, 00:17:53.782 "auth": { 00:17:53.782 "state": "completed", 00:17:53.782 "digest": "sha256", 00:17:53.782 "dhgroup": "ffdhe4096" 00:17:53.782 } 00:17:53.782 } 00:17:53.782 ]' 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.782 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.042 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:54.042 16:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:54.613 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.873 16:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.132 00:17:55.132 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.132 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.132 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.392 { 00:17:55.392 "cntlid": 33, 00:17:55.392 "qid": 0, 00:17:55.392 "state": "enabled", 00:17:55.392 "thread": "nvmf_tgt_poll_group_000", 00:17:55.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.392 "listen_address": { 00:17:55.392 "trtype": "TCP", 00:17:55.392 "adrfam": "IPv4", 00:17:55.392 "traddr": "10.0.0.2", 00:17:55.392 "trsvcid": "4420" 00:17:55.392 }, 00:17:55.392 "peer_address": { 00:17:55.392 "trtype": "TCP", 00:17:55.392 "adrfam": "IPv4", 00:17:55.392 "traddr": "10.0.0.1", 00:17:55.392 "trsvcid": "45444" 00:17:55.392 }, 00:17:55.392 "auth": { 00:17:55.392 "state": "completed", 00:17:55.392 "digest": "sha256", 00:17:55.392 "dhgroup": "ffdhe6144" 00:17:55.392 } 00:17:55.392 } 00:17:55.392 ]' 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.392 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.652 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.652 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.652 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.652 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.652 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.652 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret 
DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:55.652 16:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.597 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.857 00:17:56.857 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.857 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.857 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.117 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.117 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.117 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.117 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.117 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.117 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.117 { 00:17:57.117 "cntlid": 35, 00:17:57.117 "qid": 0, 00:17:57.117 "state": "enabled", 00:17:57.117 "thread": "nvmf_tgt_poll_group_000", 00:17:57.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.117 "listen_address": { 00:17:57.117 "trtype": "TCP", 00:17:57.117 "adrfam": "IPv4", 00:17:57.117 "traddr": "10.0.0.2", 00:17:57.117 "trsvcid": "4420" 00:17:57.117 }, 00:17:57.117 "peer_address": { 00:17:57.117 "trtype": "TCP", 00:17:57.117 "adrfam": "IPv4", 00:17:57.117 "traddr": "10.0.0.1", 00:17:57.117 "trsvcid": "45468" 00:17:57.117 }, 00:17:57.117 "auth": { 00:17:57.117 "state": "completed", 00:17:57.117 "digest": "sha256", 00:17:57.117 "dhgroup": "ffdhe6144" 00:17:57.117 } 00:17:57.117 } 00:17:57.117 ]' 00:17:57.117 16:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.117 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.117 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.377 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.377 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.377 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.377 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.377 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.637 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:57.637 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:17:58.209 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.209 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.209 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.209 16:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.209 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.209 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.209 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.209 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.470 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.731 00:17:58.731 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.731 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.731 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.994 { 00:17:58.994 "cntlid": 37, 00:17:58.994 "qid": 0, 00:17:58.994 "state": "enabled", 00:17:58.994 "thread": "nvmf_tgt_poll_group_000", 00:17:58.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.994 "listen_address": { 00:17:58.994 "trtype": "TCP", 00:17:58.994 "adrfam": "IPv4", 00:17:58.994 "traddr": "10.0.0.2", 00:17:58.994 "trsvcid": "4420" 00:17:58.994 }, 00:17:58.994 "peer_address": { 00:17:58.994 "trtype": "TCP", 00:17:58.994 "adrfam": "IPv4", 00:17:58.994 "traddr": "10.0.0.1", 00:17:58.994 "trsvcid": "45488" 00:17:58.994 }, 00:17:58.994 "auth": { 00:17:58.994 "state": "completed", 00:17:58.994 "digest": "sha256", 00:17:58.994 "dhgroup": "ffdhe6144" 00:17:58.994 } 00:17:58.994 } 00:17:58.994 ]' 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:58.994 16:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.256 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:59.256 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:17:59.826 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.826 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.826 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.826 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.088 16:12:35 
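Each attach is verified from both sides before teardown, which is what the repeated jq calls in this trace are doing: the host app must report the controller as nvme0, and the target's qpair listing must show DH-HMAC-CHAP finished with exactly the digest and DH group under test. (The escaped patterns such as \s\h\a\2\5\6 are only an xtrace artifact: bash prints the right-hand side of a [[ == ]] literal match with every character backslash-quoted.) Condensed from the auth.sh@73-77 markers, with $digest and $dhgroup standing in for the values of the current pass:

    # host side: the attached controller must be visible as nvme0
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # target side: authentication must have completed with the expected parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]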
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.088 16:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.348 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.609 { 00:18:00.609 "cntlid": 39, 00:18:00.609 "qid": 0, 00:18:00.609 "state": "enabled", 00:18:00.609 "thread": "nvmf_tgt_poll_group_000", 00:18:00.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.609 "listen_address": { 00:18:00.609 "trtype": "TCP", 00:18:00.609 "adrfam": "IPv4", 00:18:00.609 "traddr": "10.0.0.2", 00:18:00.609 "trsvcid": "4420" 00:18:00.609 }, 00:18:00.609 "peer_address": { 00:18:00.609 "trtype": "TCP", 00:18:00.609 "adrfam": "IPv4", 00:18:00.609 "traddr": "10.0.0.1", 00:18:00.609 "trsvcid": "57756" 00:18:00.609 }, 00:18:00.609 "auth": { 00:18:00.609 "state": "completed", 00:18:00.609 "digest": "sha256", 00:18:00.609 "dhgroup": "ffdhe6144" 00:18:00.609 } 00:18:00.609 } 00:18:00.609 ]' 00:18:00.609 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:00.870 16:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
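After the RPC-level round trip, each pass also proves the keys against a real kernel host: nvme_connect wraps nvme-cli's fabrics connect with the same DHHC-1 secrets the target holds. Condensed from the auth.sh@36 and auth.sh@82 invocations in this log; to the best of my reading of nvme-cli, -i 1 caps the connection at one I/O queue and -l 0 sets ctrl-loss-tmo to zero so the host gives up at once instead of retrying a lost controller:

    nvme connect -t tcp -a 10.0.0.2 -i 1 -l 0 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0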
00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.814 16:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.386 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.386 { 00:18:02.386 "cntlid": 41, 00:18:02.386 "qid": 0, 00:18:02.386 "state": "enabled", 00:18:02.386 "thread": "nvmf_tgt_poll_group_000", 00:18:02.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.386 "listen_address": { 00:18:02.386 "trtype": "TCP", 00:18:02.386 "adrfam": "IPv4", 00:18:02.386 "traddr": "10.0.0.2", 00:18:02.386 "trsvcid": "4420" 00:18:02.386 }, 00:18:02.386 "peer_address": { 00:18:02.386 "trtype": "TCP", 00:18:02.386 "adrfam": "IPv4", 00:18:02.386 "traddr": "10.0.0.1", 00:18:02.386 "trsvcid": "57770" 00:18:02.386 }, 00:18:02.386 "auth": { 00:18:02.386 "state": "completed", 00:18:02.386 "digest": "sha256", 00:18:02.386 "dhgroup": "ffdhe8192" 00:18:02.386 } 00:18:02.386 } 00:18:02.386 ]' 00:18:02.386 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.646 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.646 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.646 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.646 16:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.646 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.646 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.646 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.907 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:02.907 16:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:03.477 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.477 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.477 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.477 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.477 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.477 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.477 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.478 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.738 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:03.738 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.738 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.738 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.738 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.739 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.999 00:18:04.261 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.261 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.261 16:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.261 { 00:18:04.261 "cntlid": 43, 00:18:04.261 "qid": 0, 00:18:04.261 "state": "enabled", 00:18:04.261 "thread": "nvmf_tgt_poll_group_000", 00:18:04.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.261 "listen_address": { 00:18:04.261 "trtype": "TCP", 00:18:04.261 "adrfam": "IPv4", 00:18:04.261 "traddr": "10.0.0.2", 00:18:04.261 "trsvcid": "4420" 00:18:04.261 }, 00:18:04.261 "peer_address": { 00:18:04.261 "trtype": "TCP", 00:18:04.261 "adrfam": "IPv4", 00:18:04.261 "traddr": "10.0.0.1", 00:18:04.261 "trsvcid": "57812" 00:18:04.261 }, 00:18:04.261 "auth": { 00:18:04.261 "state": "completed", 00:18:04.261 "digest": "sha256", 00:18:04.261 "dhgroup": "ffdhe8192" 00:18:04.261 } 00:18:04.261 } 00:18:04.261 ]' 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:04.261 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.522 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.522 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.522 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.522 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.522 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.522 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:04.522 16:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:05.465 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.465 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.465 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.465 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.465 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.465 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.465 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.466 16:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.466 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.038 00:18:06.038 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.038 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.038 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.300 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.300 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.300 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.300 16:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.300 { 00:18:06.300 "cntlid": 45, 00:18:06.300 "qid": 0, 00:18:06.300 "state": "enabled", 00:18:06.300 "thread": "nvmf_tgt_poll_group_000", 00:18:06.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.300 "listen_address": { 00:18:06.300 "trtype": "TCP", 00:18:06.300 "adrfam": "IPv4", 00:18:06.300 "traddr": "10.0.0.2", 00:18:06.300 "trsvcid": "4420" 00:18:06.300 }, 00:18:06.300 "peer_address": { 00:18:06.300 "trtype": "TCP", 00:18:06.300 "adrfam": "IPv4", 00:18:06.300 "traddr": "10.0.0.1", 00:18:06.300 "trsvcid": "57828" 00:18:06.300 }, 00:18:06.300 "auth": { 00:18:06.300 "state": "completed", 00:18:06.300 "digest": "sha256", 00:18:06.300 "dhgroup": "ffdhe8192" 00:18:06.300 } 00:18:06.300 } 00:18:06.300 ]' 00:18:06.300 
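Each qpairs snapshot like the one above is then checked field by field with jq (the auth.sh@75-@77 lines that follow). Condensed out of the xtrace noise, the verification step amounts to the sketch below; the rpc.py path and subsystem NQN are taken from this log, and the target is assumed to answer on its default RPC socket.

#!/usr/bin/env bash
# Condensed sketch of the per-iteration qpair verification seen in this log.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Fetch the active qpairs for the subsystem from the target.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")

# The test asserts three fields of the first qpair: the negotiated digest,
# the negotiated DH group, and that the DH-HMAC-CHAP exchange completed.
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]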
16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.300 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.561 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:06.561 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.133 16:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.393 16:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.393 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.964 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.964 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.964 { 00:18:07.964 "cntlid": 47, 00:18:07.964 "qid": 0, 00:18:07.964 "state": "enabled", 00:18:07.964 "thread": "nvmf_tgt_poll_group_000", 00:18:07.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.964 "listen_address": { 00:18:07.964 "trtype": "TCP", 00:18:07.964 "adrfam": "IPv4", 00:18:07.964 "traddr": "10.0.0.2", 00:18:07.964 "trsvcid": "4420" 00:18:07.964 }, 00:18:07.964 "peer_address": { 00:18:07.964 "trtype": "TCP", 00:18:07.964 "adrfam": "IPv4", 00:18:07.964 "traddr": "10.0.0.1", 00:18:07.964 "trsvcid": "57862" 00:18:07.964 }, 00:18:07.964 "auth": { 00:18:07.964 "state": "completed", 00:18:07.964 
"digest": "sha256", 00:18:07.964 "dhgroup": "ffdhe8192" 00:18:07.965 } 00:18:07.965 } 00:18:07.965 ]' 00:18:07.965 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.225 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.225 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.225 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.225 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.225 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.225 16:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.225 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.486 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:08.486 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.058 16:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.318 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:09.318 16:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.318 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.318 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:09.318 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.318 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.319 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.319 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.319 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.319 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.319 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.319 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.319 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.580 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.580 { 00:18:09.580 "cntlid": 49, 00:18:09.580 "qid": 0, 00:18:09.580 "state": "enabled", 00:18:09.580 "thread": "nvmf_tgt_poll_group_000", 00:18:09.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.580 "listen_address": { 00:18:09.580 "trtype": "TCP", 00:18:09.580 "adrfam": "IPv4", 
00:18:09.580 "traddr": "10.0.0.2", 00:18:09.580 "trsvcid": "4420" 00:18:09.580 }, 00:18:09.580 "peer_address": { 00:18:09.580 "trtype": "TCP", 00:18:09.580 "adrfam": "IPv4", 00:18:09.580 "traddr": "10.0.0.1", 00:18:09.580 "trsvcid": "54400" 00:18:09.580 }, 00:18:09.580 "auth": { 00:18:09.580 "state": "completed", 00:18:09.580 "digest": "sha384", 00:18:09.580 "dhgroup": "null" 00:18:09.580 } 00:18:09.580 } 00:18:09.580 ]' 00:18:09.580 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.841 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.841 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.841 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:09.841 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.841 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.841 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.841 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.101 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:10.101 16:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.671 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.932 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.933 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.194 00:18:11.194 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.194 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.194 16:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.194 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.194 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.194 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.194 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.194 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.194 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.194 { 00:18:11.194 "cntlid": 51, 00:18:11.194 "qid": 0, 00:18:11.194 "state": "enabled", 
00:18:11.194 "thread": "nvmf_tgt_poll_group_000", 00:18:11.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.194 "listen_address": { 00:18:11.194 "trtype": "TCP", 00:18:11.194 "adrfam": "IPv4", 00:18:11.194 "traddr": "10.0.0.2", 00:18:11.194 "trsvcid": "4420" 00:18:11.194 }, 00:18:11.194 "peer_address": { 00:18:11.194 "trtype": "TCP", 00:18:11.194 "adrfam": "IPv4", 00:18:11.194 "traddr": "10.0.0.1", 00:18:11.194 "trsvcid": "54418" 00:18:11.194 }, 00:18:11.194 "auth": { 00:18:11.194 "state": "completed", 00:18:11.194 "digest": "sha384", 00:18:11.194 "dhgroup": "null" 00:18:11.194 } 00:18:11.194 } 00:18:11.194 ]' 00:18:11.194 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.455 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.455 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.455 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:11.455 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.455 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.455 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.455 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.715 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:11.715 16:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
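Stripped of the xtrace interleaving, each round of the digest/dhgroup loop reduces to three RPC calls: constrain the host-side driver, register the host NQN with one key pair on the target, and attach a controller (which is what actually runs the authentication). The sketch below mirrors the sha384/null, key2 round that follows; socket paths, NQNs, and key names are copied from this log, the keyring (key0..key3, ckey0..ckey3) is assumed to have been loaded earlier in the script, and the target is assumed to answer on its default RPC socket.

#!/usr/bin/env bash
# Sketch of one configuration round of the auth loop in this log.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the bdev_nvme driver to one digest/DH-group combination,
# so a successful attach proves that this specific negotiation worked.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups null

# Target side: allow the host NQN on the subsystem, bound to one key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attaching a controller triggers the DH-HMAC-CHAP transaction.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2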
00:18:12.287 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.547 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.808 00:18:12.808 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.808 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.808 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.808 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.808 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.808 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.808 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.069 16:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.069 { 00:18:13.069 "cntlid": 53, 00:18:13.069 "qid": 0, 00:18:13.069 "state": "enabled", 00:18:13.069 "thread": "nvmf_tgt_poll_group_000", 00:18:13.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.069 "listen_address": { 00:18:13.069 "trtype": "TCP", 00:18:13.069 "adrfam": "IPv4", 00:18:13.069 "traddr": "10.0.0.2", 00:18:13.069 "trsvcid": "4420" 00:18:13.069 }, 00:18:13.069 "peer_address": { 00:18:13.069 "trtype": "TCP", 00:18:13.069 "adrfam": "IPv4", 00:18:13.069 "traddr": "10.0.0.1", 00:18:13.069 "trsvcid": "54448" 00:18:13.069 }, 00:18:13.069 "auth": { 00:18:13.069 "state": "completed", 00:18:13.069 "digest": "sha384", 00:18:13.069 "dhgroup": "null" 00:18:13.069 } 00:18:13.069 } 00:18:13.069 ]' 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.069 16:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.330 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:13.330 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.900 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.162 16:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.424 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.424 { 00:18:14.424 "cntlid": 55, 00:18:14.424 "qid": 0, 00:18:14.424 "state": "enabled", 00:18:14.424 "thread": "nvmf_tgt_poll_group_000", 00:18:14.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.424 "listen_address": { 00:18:14.424 "trtype": "TCP", 00:18:14.424 "adrfam": "IPv4", 00:18:14.424 "traddr": "10.0.0.2", 00:18:14.424 "trsvcid": "4420" 00:18:14.424 }, 00:18:14.424 "peer_address": { 00:18:14.424 "trtype": "TCP", 00:18:14.424 "adrfam": "IPv4", 00:18:14.424 "traddr": "10.0.0.1", 00:18:14.424 "trsvcid": "54486" 00:18:14.424 }, 00:18:14.424 "auth": { 00:18:14.424 "state": "completed", 00:18:14.424 "digest": "sha384", 00:18:14.424 "dhgroup": "null" 00:18:14.424 } 00:18:14.424 } 00:18:14.424 ]' 00:18:14.424 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:14.687 16:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.630 16:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.630 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.631 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.892 00:18:15.892 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.892 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.892 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.153 { 00:18:16.153 "cntlid": 57, 00:18:16.153 "qid": 0, 00:18:16.153 "state": "enabled", 00:18:16.153 "thread": "nvmf_tgt_poll_group_000", 00:18:16.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.153 "listen_address": { 00:18:16.153 "trtype": "TCP", 00:18:16.153 "adrfam": "IPv4", 00:18:16.153 "traddr": "10.0.0.2", 00:18:16.153 "trsvcid": "4420" 00:18:16.153 }, 00:18:16.153 "peer_address": { 00:18:16.153 "trtype": "TCP", 00:18:16.153 "adrfam": "IPv4", 00:18:16.153 "traddr": "10.0.0.1", 00:18:16.153 "trsvcid": "54518" 00:18:16.153 }, 00:18:16.153 "auth": { 00:18:16.153 "state": "completed", 00:18:16.153 "digest": "sha384", 00:18:16.153 "dhgroup": "ffdhe2048" 00:18:16.153 } 00:18:16.153 } 00:18:16.153 ]' 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.153 16:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.153 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.153 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.153 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.414 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:16.414 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:16.985 16:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.245 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.507 00:18:17.507 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.507 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.507 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.767 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.767 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.767 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.767 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.768 { 00:18:17.768 "cntlid": 59, 00:18:17.768 "qid": 0, 00:18:17.768 "state": "enabled", 00:18:17.768 "thread": "nvmf_tgt_poll_group_000", 00:18:17.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.768 "listen_address": { 00:18:17.768 "trtype": "TCP", 00:18:17.768 "adrfam": "IPv4", 00:18:17.768 "traddr": "10.0.0.2", 00:18:17.768 "trsvcid": "4420" 00:18:17.768 }, 00:18:17.768 "peer_address": { 00:18:17.768 "trtype": "TCP", 00:18:17.768 "adrfam": "IPv4", 00:18:17.768 "traddr": "10.0.0.1", 00:18:17.768 "trsvcid": "54556" 00:18:17.768 }, 00:18:17.768 "auth": { 00:18:17.768 "state": "completed", 00:18:17.768 "digest": "sha384", 00:18:17.768 "dhgroup": "ffdhe2048" 00:18:17.768 } 00:18:17.768 } 00:18:17.768 ]' 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.768 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.029 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:18.029 16:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:18.607 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.607 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.607 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.607 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.868 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.129 00:18:19.129 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.129 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.129 16:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.391 { 00:18:19.391 "cntlid": 61, 00:18:19.391 "qid": 0, 00:18:19.391 "state": "enabled", 00:18:19.391 "thread": "nvmf_tgt_poll_group_000", 00:18:19.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.391 "listen_address": { 00:18:19.391 "trtype": "TCP", 00:18:19.391 "adrfam": "IPv4", 00:18:19.391 "traddr": "10.0.0.2", 00:18:19.391 "trsvcid": "4420" 00:18:19.391 }, 00:18:19.391 "peer_address": { 00:18:19.391 "trtype": "TCP", 00:18:19.391 "adrfam": "IPv4", 00:18:19.391 "traddr": "10.0.0.1", 00:18:19.391 "trsvcid": "34576" 00:18:19.391 }, 00:18:19.391 "auth": { 00:18:19.391 "state": "completed", 00:18:19.391 "digest": "sha384", 00:18:19.391 "dhgroup": "ffdhe2048" 00:18:19.391 } 00:18:19.391 } 00:18:19.391 ]' 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.391 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.653 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:19.653 16:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:20.225 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.486 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.747 00:18:20.747 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.747 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.747 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.009 { 00:18:21.009 "cntlid": 63, 00:18:21.009 "qid": 0, 00:18:21.009 "state": "enabled", 00:18:21.009 "thread": "nvmf_tgt_poll_group_000", 00:18:21.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.009 "listen_address": { 00:18:21.009 "trtype": "TCP", 00:18:21.009 "adrfam": "IPv4", 00:18:21.009 "traddr": "10.0.0.2", 00:18:21.009 "trsvcid": "4420" 00:18:21.009 }, 00:18:21.009 "peer_address": { 00:18:21.009 "trtype": "TCP", 00:18:21.009 "adrfam": "IPv4", 00:18:21.009 "traddr": "10.0.0.1", 00:18:21.009 "trsvcid": "34602" 00:18:21.009 }, 00:18:21.009 "auth": { 00:18:21.009 "state": "completed", 00:18:21.009 "digest": "sha384", 00:18:21.009 "dhgroup": "ffdhe2048" 00:18:21.009 } 00:18:21.009 } 00:18:21.009 ]' 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.009 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.270 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.270 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.270 16:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.270 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:21.270 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:21.840 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:22.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.107 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.107 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.108 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.411 
00:18:22.411 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.412 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.412 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.716 { 00:18:22.716 "cntlid": 65, 00:18:22.716 "qid": 0, 00:18:22.716 "state": "enabled", 00:18:22.716 "thread": "nvmf_tgt_poll_group_000", 00:18:22.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.716 "listen_address": { 00:18:22.716 "trtype": "TCP", 00:18:22.716 "adrfam": "IPv4", 00:18:22.716 "traddr": "10.0.0.2", 00:18:22.716 "trsvcid": "4420" 00:18:22.716 }, 00:18:22.716 "peer_address": { 00:18:22.716 "trtype": "TCP", 00:18:22.716 "adrfam": "IPv4", 00:18:22.716 "traddr": "10.0.0.1", 00:18:22.716 "trsvcid": "34634" 00:18:22.716 }, 00:18:22.716 "auth": { 00:18:22.716 "state": "completed", 00:18:22.716 "digest": "sha384", 00:18:22.716 "dhgroup": "ffdhe3072" 00:18:22.716 } 00:18:22.716 } 00:18:22.716 ]' 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.716 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.980 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:22.980 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.551 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.811 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.071 00:18:24.071 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.071 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.071 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.071 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.071 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.332 { 00:18:24.332 "cntlid": 67, 00:18:24.332 "qid": 0, 00:18:24.332 "state": "enabled", 00:18:24.332 "thread": "nvmf_tgt_poll_group_000", 00:18:24.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.332 "listen_address": { 00:18:24.332 "trtype": "TCP", 00:18:24.332 "adrfam": "IPv4", 00:18:24.332 "traddr": "10.0.0.2", 00:18:24.332 "trsvcid": "4420" 00:18:24.332 }, 00:18:24.332 "peer_address": { 00:18:24.332 "trtype": "TCP", 00:18:24.332 "adrfam": "IPv4", 00:18:24.332 "traddr": "10.0.0.1", 00:18:24.332 "trsvcid": "34680" 00:18:24.332 }, 00:18:24.332 "auth": { 00:18:24.332 "state": "completed", 00:18:24.332 "digest": "sha384", 00:18:24.332 "dhgroup": "ffdhe3072" 00:18:24.332 } 00:18:24.332 } 00:18:24.332 ]' 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.332 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.593 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret 
DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:24.593 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:25.165 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.165 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.165 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.165 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.165 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.165 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.166 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.166 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.427 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.687 00:18:25.687 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.687 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.687 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.948 { 00:18:25.948 "cntlid": 69, 00:18:25.948 "qid": 0, 00:18:25.948 "state": "enabled", 00:18:25.948 "thread": "nvmf_tgt_poll_group_000", 00:18:25.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.948 "listen_address": { 00:18:25.948 "trtype": "TCP", 00:18:25.948 "adrfam": "IPv4", 00:18:25.948 "traddr": "10.0.0.2", 00:18:25.948 "trsvcid": "4420" 00:18:25.948 }, 00:18:25.948 "peer_address": { 00:18:25.948 "trtype": "TCP", 00:18:25.948 "adrfam": "IPv4", 00:18:25.948 "traddr": "10.0.0.1", 00:18:25.948 "trsvcid": "34710" 00:18:25.948 }, 00:18:25.948 "auth": { 00:18:25.948 "state": "completed", 00:18:25.948 "digest": "sha384", 00:18:25.948 "dhgroup": "ffdhe3072" 00:18:25.948 } 00:18:25.948 } 00:18:25.948 ]' 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.948 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:26.208 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:26.208 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.779 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.040 16:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.301 00:18:27.301 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.301 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.301 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.301 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.301 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.301 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.301 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.561 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.561 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.561 { 00:18:27.561 "cntlid": 71, 00:18:27.561 "qid": 0, 00:18:27.561 "state": "enabled", 00:18:27.561 "thread": "nvmf_tgt_poll_group_000", 00:18:27.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.561 "listen_address": { 00:18:27.561 "trtype": "TCP", 00:18:27.561 "adrfam": "IPv4", 00:18:27.561 "traddr": "10.0.0.2", 00:18:27.561 "trsvcid": "4420" 00:18:27.561 }, 00:18:27.561 "peer_address": { 00:18:27.561 "trtype": "TCP", 00:18:27.561 "adrfam": "IPv4", 00:18:27.561 "traddr": "10.0.0.1", 00:18:27.561 "trsvcid": "34740" 00:18:27.561 }, 00:18:27.561 "auth": { 00:18:27.561 "state": "completed", 00:18:27.561 "digest": "sha384", 00:18:27.562 "dhgroup": "ffdhe3072" 00:18:27.562 } 00:18:27.562 } 00:18:27.562 ]' 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.562 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.823 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:27.823 16:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:28.394 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.655 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.656 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:28.656 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.656 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.656 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.916 00:18:28.916 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.916 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.916 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.177 { 00:18:29.177 "cntlid": 73, 00:18:29.177 "qid": 0, 00:18:29.177 "state": "enabled", 00:18:29.177 "thread": "nvmf_tgt_poll_group_000", 00:18:29.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.177 "listen_address": { 00:18:29.177 "trtype": "TCP", 00:18:29.177 "adrfam": "IPv4", 00:18:29.177 "traddr": "10.0.0.2", 00:18:29.177 "trsvcid": "4420" 00:18:29.177 }, 00:18:29.177 "peer_address": { 00:18:29.177 "trtype": "TCP", 00:18:29.177 "adrfam": "IPv4", 00:18:29.177 "traddr": "10.0.0.1", 00:18:29.177 "trsvcid": "34768" 00:18:29.177 }, 00:18:29.177 "auth": { 00:18:29.177 "state": "completed", 00:18:29.177 "digest": "sha384", 00:18:29.177 "dhgroup": "ffdhe4096" 00:18:29.177 } 00:18:29.177 } 00:18:29.177 ]' 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.177 16:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.177 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.177 
16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.177 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.437 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:29.437 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:30.008 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.008 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.009 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.009 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.009 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.009 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.009 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.009 16:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
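
Each round also exercises the kernel initiator: nvme-cli is handed the host and controller secrets in-band as DHHC-1 strings (per the NVMe DH-HMAC-CHAP secret format, the two-digit field after "DHHC-1:" names the hash transform applied to the stored secret: 00 is a plain untransformed key, 01/02/03 the SHA-256/384/512 variants). A sketch with placeholder secrets, mirroring the connect in the trace; real DHHC-1 strings would come from the test's fixed key table or a generator such as nvme gen-dhchap-key:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
      -q "$HOSTNQN" --hostid "$HOSTID" -i 1 -l 0 \
      --dhchap-secret 'DHHC-1:00:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0  # expect: disconnected 1 controller(s)

Here -i 1 limits the session to a single I/O queue and -l 0 zeroes the controller-loss timeout, matching the flags in the trace.
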
common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.270 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.531 00:18:30.531 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.531 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.531 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.791 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.791 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.791 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.791 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.791 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.791 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.791 { 00:18:30.791 "cntlid": 75, 00:18:30.791 "qid": 0, 00:18:30.791 "state": "enabled", 00:18:30.792 "thread": "nvmf_tgt_poll_group_000", 00:18:30.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.792 "listen_address": { 00:18:30.792 "trtype": "TCP", 00:18:30.792 "adrfam": "IPv4", 00:18:30.792 "traddr": "10.0.0.2", 00:18:30.792 "trsvcid": "4420" 00:18:30.792 }, 00:18:30.792 "peer_address": { 00:18:30.792 "trtype": "TCP", 00:18:30.792 "adrfam": "IPv4", 00:18:30.792 "traddr": "10.0.0.1", 00:18:30.792 "trsvcid": "39160" 00:18:30.792 }, 00:18:30.792 "auth": { 00:18:30.792 "state": "completed", 00:18:30.792 "digest": "sha384", 00:18:30.792 "dhgroup": "ffdhe4096" 00:18:30.792 } 00:18:30.792 } 00:18:30.792 ]' 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.792 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.052 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:31.052 16:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:31.622 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.884 16:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.144 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.405 { 00:18:32.405 "cntlid": 77, 00:18:32.405 "qid": 0, 00:18:32.405 "state": "enabled", 00:18:32.405 "thread": "nvmf_tgt_poll_group_000", 00:18:32.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.405 "listen_address": { 00:18:32.405 "trtype": "TCP", 00:18:32.405 "adrfam": "IPv4", 00:18:32.405 "traddr": "10.0.0.2", 00:18:32.405 "trsvcid": "4420" 00:18:32.405 }, 00:18:32.405 "peer_address": { 00:18:32.405 "trtype": "TCP", 00:18:32.405 "adrfam": "IPv4", 00:18:32.405 "traddr": "10.0.0.1", 00:18:32.405 "trsvcid": "39178" 00:18:32.405 }, 00:18:32.405 "auth": { 00:18:32.405 "state": "completed", 00:18:32.405 "digest": "sha384", 00:18:32.405 "dhgroup": "ffdhe4096" 00:18:32.405 } 00:18:32.405 } 00:18:32.405 ]' 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.405 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.405 16:13:08 
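
The pattern repeating through this stretch is the test's inner loop: for each key ID it pins the host's permitted digest and DH group, registers the host NQN on the subsystem with the matching key pair, attaches and verifies, then tears the connection down again. Reconstructed loosely from the trace (connect_authenticate is the helper whose body the xtrace shows starting at target/auth.sh@65):

  for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
      connect_authenticate sha384 ffdhe4096 "$keyid"  # add_host + attach + qpair assertions
  done

A side effect visible in the JSON snapshots: every authenticated attach is a fresh controller on the target, so the reported cntlid keeps climbing across rounds (73, 75, 77, ...).
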
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.665 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.665 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.665 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.665 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.665 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.665 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:32.665 16:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.607 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.867 00:18:33.867 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.867 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.868 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.128 { 00:18:34.128 "cntlid": 79, 00:18:34.128 "qid": 0, 00:18:34.128 "state": "enabled", 00:18:34.128 "thread": "nvmf_tgt_poll_group_000", 00:18:34.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.128 "listen_address": { 00:18:34.128 "trtype": "TCP", 00:18:34.128 "adrfam": "IPv4", 00:18:34.128 "traddr": "10.0.0.2", 00:18:34.128 "trsvcid": "4420" 00:18:34.128 }, 00:18:34.128 "peer_address": { 00:18:34.128 "trtype": "TCP", 00:18:34.128 "adrfam": "IPv4", 00:18:34.128 "traddr": "10.0.0.1", 00:18:34.128 "trsvcid": "39196" 00:18:34.128 }, 00:18:34.128 "auth": { 00:18:34.128 "state": "completed", 00:18:34.128 "digest": "sha384", 00:18:34.128 "dhgroup": "ffdhe4096" 00:18:34.128 } 00:18:34.128 } 00:18:34.128 ]' 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.128 16:13:09 
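
Note the key3 round here: nvmf_subsystem_add_host and the subsequent attach carry --dhchap-key key3 only, with no --dhchap-ctrlr-key. That is the parameter expansion shown in the xtrace at work, making bidirectional authentication optional per key (subnqn/hostnqn below are stand-ins for the literal NQNs in the trace):

  # from connect_authenticate: expands to nothing when ckeys[$3] is unset or empty,
  # so key3 authenticates the host only and the controller is never challenged back
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"

The nvme-cli side matches: the key3 connect that follows passes a --dhchap-secret but no --dhchap-ctrl-secret.
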
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.128 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.129 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.129 16:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.129 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.129 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.129 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.390 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:34.390 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.962 16:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:35.224 16:13:11 
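
Here the outer loop has advanced (target/auth.sh@119, for dhgroup in "${dhgroups[@]}") and the whole four-key sequence repeats under ffdhe6144. Pinning both ends to a single digest/DH-group pair before each round is what gives the later string comparisons their force: if negotiation could fall back to another group, [[ ffdhe6144 == ... ]] against the qpair record would prove nothing. The host-side pin is the one-liner from the trace:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
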
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.224 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.484 00:18:35.484 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.484 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.484 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.745 { 00:18:35.745 "cntlid": 81, 00:18:35.745 "qid": 0, 00:18:35.745 "state": "enabled", 00:18:35.745 "thread": "nvmf_tgt_poll_group_000", 00:18:35.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.745 "listen_address": { 00:18:35.745 "trtype": "TCP", 00:18:35.745 "adrfam": "IPv4", 00:18:35.745 "traddr": "10.0.0.2", 00:18:35.745 "trsvcid": "4420" 00:18:35.745 }, 00:18:35.745 "peer_address": { 00:18:35.745 "trtype": "TCP", 00:18:35.745 "adrfam": "IPv4", 00:18:35.745 "traddr": "10.0.0.1", 00:18:35.745 "trsvcid": "39228" 00:18:35.745 }, 00:18:35.745 "auth": { 00:18:35.745 "state": "completed", 00:18:35.745 "digest": 
"sha384", 00:18:35.745 "dhgroup": "ffdhe6144" 00:18:35.745 } 00:18:35.745 } 00:18:35.745 ]' 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.745 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.006 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.006 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.006 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.006 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:36.006 16:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:36.577 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.838 16:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.099 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.360 { 00:18:37.360 "cntlid": 83, 00:18:37.360 "qid": 0, 00:18:37.360 "state": "enabled", 00:18:37.360 "thread": "nvmf_tgt_poll_group_000", 00:18:37.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.360 "listen_address": { 00:18:37.360 "trtype": "TCP", 00:18:37.360 "adrfam": "IPv4", 00:18:37.360 "traddr": "10.0.0.2", 00:18:37.360 
"trsvcid": "4420" 00:18:37.360 }, 00:18:37.360 "peer_address": { 00:18:37.360 "trtype": "TCP", 00:18:37.360 "adrfam": "IPv4", 00:18:37.360 "traddr": "10.0.0.1", 00:18:37.360 "trsvcid": "39250" 00:18:37.360 }, 00:18:37.360 "auth": { 00:18:37.360 "state": "completed", 00:18:37.360 "digest": "sha384", 00:18:37.360 "dhgroup": "ffdhe6144" 00:18:37.360 } 00:18:37.360 } 00:18:37.360 ]' 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.360 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.620 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.620 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.620 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.620 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.620 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.620 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:37.620 16:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.563 
16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.563 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.824 00:18:38.824 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.824 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.824 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.085 { 00:18:39.085 "cntlid": 85, 00:18:39.085 "qid": 0, 00:18:39.085 "state": "enabled", 00:18:39.085 "thread": "nvmf_tgt_poll_group_000", 00:18:39.085 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.085 "listen_address": { 00:18:39.085 "trtype": "TCP", 00:18:39.085 "adrfam": "IPv4", 00:18:39.085 "traddr": "10.0.0.2", 00:18:39.085 "trsvcid": "4420" 00:18:39.085 }, 00:18:39.085 "peer_address": { 00:18:39.085 "trtype": "TCP", 00:18:39.085 "adrfam": "IPv4", 00:18:39.085 "traddr": "10.0.0.1", 00:18:39.085 "trsvcid": "39282" 00:18:39.085 }, 00:18:39.085 "auth": { 00:18:39.085 "state": "completed", 00:18:39.085 "digest": "sha384", 00:18:39.085 "dhgroup": "ffdhe6144" 00:18:39.085 } 00:18:39.085 } 00:18:39.085 ]' 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.085 16:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.085 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.085 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.345 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.345 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.345 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.345 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:39.345 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:40.287 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.287 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.287 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.287 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.287 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.287 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.287 16:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.287 16:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.287 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.547 00:18:40.547 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.547 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.547 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.807 { 00:18:40.807 "cntlid": 87, 
00:18:40.807 "qid": 0, 00:18:40.807 "state": "enabled", 00:18:40.807 "thread": "nvmf_tgt_poll_group_000", 00:18:40.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.807 "listen_address": { 00:18:40.807 "trtype": "TCP", 00:18:40.807 "adrfam": "IPv4", 00:18:40.807 "traddr": "10.0.0.2", 00:18:40.807 "trsvcid": "4420" 00:18:40.807 }, 00:18:40.807 "peer_address": { 00:18:40.807 "trtype": "TCP", 00:18:40.807 "adrfam": "IPv4", 00:18:40.807 "traddr": "10.0.0.1", 00:18:40.807 "trsvcid": "56318" 00:18:40.807 }, 00:18:40.807 "auth": { 00:18:40.807 "state": "completed", 00:18:40.807 "digest": "sha384", 00:18:40.807 "dhgroup": "ffdhe6144" 00:18:40.807 } 00:18:40.807 } 00:18:40.807 ]' 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.807 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.067 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:41.068 16:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:41.638 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.638 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.638 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.638 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.899 16:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.470 00:18:42.470 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.470 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.470 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
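
ffdhe8192, the group for these final rounds, is the largest of the RFC 7919 finite-field groups that DH-HMAC-CHAP draws on, and the most computationally expensive exchange in the matrix. The per-field [[ x == y ]] assertions could equally be collapsed into a single jq predicate against the qpair record; an equivalent formulation (not what auth.sh itself does):

  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -e '.[0].auth == {state: "completed", digest: "sha384", dhgroup: "ffdhe8192"}'

jq -e propagates the predicate into the exit status, so the check composes with the script's assertion style just like the existing comparisons.
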
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.732 { 00:18:42.732 "cntlid": 89, 00:18:42.732 "qid": 0, 00:18:42.732 "state": "enabled", 00:18:42.732 "thread": "nvmf_tgt_poll_group_000", 00:18:42.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.732 "listen_address": { 00:18:42.732 "trtype": "TCP", 00:18:42.732 "adrfam": "IPv4", 00:18:42.732 "traddr": "10.0.0.2", 00:18:42.732 "trsvcid": "4420" 00:18:42.732 }, 00:18:42.732 "peer_address": { 00:18:42.732 "trtype": "TCP", 00:18:42.732 "adrfam": "IPv4", 00:18:42.732 "traddr": "10.0.0.1", 00:18:42.732 "trsvcid": "56334" 00:18:42.732 }, 00:18:42.732 "auth": { 00:18:42.732 "state": "completed", 00:18:42.732 "digest": "sha384", 00:18:42.732 "dhgroup": "ffdhe8192" 00:18:42.732 } 00:18:42.732 } 00:18:42.732 ]' 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.732 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.993 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:42.993 16:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:43.563 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.563 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.563 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 16:13:19 
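
One detail worth noticing in these ffdhe8192 rounds: the DHHC-1 secret strings are byte-identical to the ones used under ffdhe4096 and ffdhe6144. The DH group changes the key exchange, not the shared secrets, so a single key table serves the whole digest/group matrix. On the SPDK side the named keys (key0..key3, ckey0..ckey2) would have been registered from key files earlier in the script via the file-based keyring RPC, along the lines of (file path illustrative):

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key0.txt

with the same registration done against the host socket, which is why the attach and add_host calls can refer to keys purely by name.
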
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.563 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.563 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.823 16:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.394 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.394 { 00:18:44.394 "cntlid": 91, 00:18:44.394 "qid": 0, 00:18:44.394 "state": "enabled", 00:18:44.394 "thread": "nvmf_tgt_poll_group_000", 00:18:44.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.394 "listen_address": { 00:18:44.394 "trtype": "TCP", 00:18:44.394 "adrfam": "IPv4", 00:18:44.394 "traddr": "10.0.0.2", 00:18:44.394 "trsvcid": "4420" 00:18:44.394 }, 00:18:44.394 "peer_address": { 00:18:44.394 "trtype": "TCP", 00:18:44.394 "adrfam": "IPv4", 00:18:44.394 "traddr": "10.0.0.1", 00:18:44.394 "trsvcid": "56372" 00:18:44.394 }, 00:18:44.394 "auth": { 00:18:44.394 "state": "completed", 00:18:44.394 "digest": "sha384", 00:18:44.394 "dhgroup": "ffdhe8192" 00:18:44.394 } 00:18:44.394 } 00:18:44.394 ]' 00:18:44.394 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:44.656 16:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.598 16:13:21 
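[The tail of each pass re-proves the same key pair through the kernel initiator, passing the secrets in DHHC-1 wire form instead of keyring names. In outline (secrets elided as '...'; the full strings appear in the trace):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
         -q "$HOSTNQN" --hostid "$HOSTID" \
         --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect "disconnected 1 controller(s)"
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The remove_host call resets the target's allowed-host list so the next keyid starts from a clean ACL.]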
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:45.598 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.599 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.170 00:18:46.171 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.171 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.171 16:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.431 16:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.431 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.431 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.431 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.431 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.431 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.431 { 00:18:46.431 "cntlid": 93, 00:18:46.431 "qid": 0, 00:18:46.431 "state": "enabled", 00:18:46.431 "thread": "nvmf_tgt_poll_group_000", 00:18:46.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.431 "listen_address": { 00:18:46.431 "trtype": "TCP", 00:18:46.431 "adrfam": "IPv4", 00:18:46.431 "traddr": "10.0.0.2", 00:18:46.432 "trsvcid": "4420" 00:18:46.432 }, 00:18:46.432 "peer_address": { 00:18:46.432 "trtype": "TCP", 00:18:46.432 "adrfam": "IPv4", 00:18:46.432 "traddr": "10.0.0.1", 00:18:46.432 "trsvcid": "56390" 00:18:46.432 }, 00:18:46.432 "auth": { 00:18:46.432 "state": "completed", 00:18:46.432 "digest": "sha384", 00:18:46.432 "dhgroup": "ffdhe8192" 00:18:46.432 } 00:18:46.432 } 00:18:46.432 ]' 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.432 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.692 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:46.692 16:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:47.264 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.264 16:13:23 
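[The escaped comparisons above ([[ nvme0 == \n\v\m\e\0 ]], [[ completed == \c\o\m\p\l\e\t\e\d ]]) are an xtrace artifact, not a script error: inside [[ ]] the right-hand side of == is a glob pattern, so bash's trace output backslash-escapes every character of a quoted operand to record that it is matched literally. In the script these are ordinary quoted string tests, approximately:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
]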
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.264 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.264 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.264 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.264 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.264 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.264 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.526 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.096 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.096 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.096 { 00:18:48.096 "cntlid": 95, 00:18:48.096 "qid": 0, 00:18:48.096 "state": "enabled", 00:18:48.096 "thread": "nvmf_tgt_poll_group_000", 00:18:48.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.096 "listen_address": { 00:18:48.096 "trtype": "TCP", 00:18:48.096 "adrfam": "IPv4", 00:18:48.096 "traddr": "10.0.0.2", 00:18:48.096 "trsvcid": "4420" 00:18:48.096 }, 00:18:48.097 "peer_address": { 00:18:48.097 "trtype": "TCP", 00:18:48.097 "adrfam": "IPv4", 00:18:48.097 "traddr": "10.0.0.1", 00:18:48.097 "trsvcid": "56410" 00:18:48.097 }, 00:18:48.097 "auth": { 00:18:48.097 "state": "completed", 00:18:48.097 "digest": "sha384", 00:18:48.097 "dhgroup": "ffdhe8192" 00:18:48.097 } 00:18:48.097 } 00:18:48.097 ]' 00:18:48.097 16:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.356 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.356 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.356 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.356 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.356 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.356 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.356 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.616 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:48.616 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.186 16:13:24 
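[The key3 pass that just finished is intentionally asymmetric: nvmf_subsystem_add_host received only --dhchap-key key3, and the nvme connect only --dhchap-secret, with no controller key anywhere. That is because ckeys[3] is empty and the harness's conditional expansion drops the flag entirely, making key3 the unidirectional (host-authenticates-only) case. A sketch of the mechanism, with variable names other than ckey/ckeys illustrative:

    # ${ckeys[$3]:+...} expands to zero words when ckeys[keyid] is empty or unset,
    # so no --dhchap-ctrlr-key argument reaches the RPC for key3
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
]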
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:49.186 16:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.186 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.446 00:18:49.446 
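[The digest has now rolled over from sha384 to sha512 and the dhgroup back to null: the auth.sh@118-@120 markers in the trace are three nested loops sweeping every digest x dhgroup x key-index combination, pinning the host to exactly one combination per pass. Reconstructed from the line markers:

    for digest in "${digests[@]}"; do            # auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119
            for keyid in "${!keys[@]}"; do       # auth.sh@120
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # auth.sh@121
                connect_authenticate "$digest" "$dhgroup" "$keyid"            # auth.sh@123
            done
        done
    done
]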
16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.447 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.447 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.706 { 00:18:49.706 "cntlid": 97, 00:18:49.706 "qid": 0, 00:18:49.706 "state": "enabled", 00:18:49.706 "thread": "nvmf_tgt_poll_group_000", 00:18:49.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:49.706 "listen_address": { 00:18:49.706 "trtype": "TCP", 00:18:49.706 "adrfam": "IPv4", 00:18:49.706 "traddr": "10.0.0.2", 00:18:49.706 "trsvcid": "4420" 00:18:49.706 }, 00:18:49.706 "peer_address": { 00:18:49.706 "trtype": "TCP", 00:18:49.706 "adrfam": "IPv4", 00:18:49.706 "traddr": "10.0.0.1", 00:18:49.706 "trsvcid": "47310" 00:18:49.706 }, 00:18:49.706 "auth": { 00:18:49.706 "state": "completed", 00:18:49.706 "digest": "sha512", 00:18:49.706 "dhgroup": "null" 00:18:49.706 } 00:18:49.706 } 00:18:49.706 ]' 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.706 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.966 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:49.966 16:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:50.535 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.795 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.055 00:18:51.055 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.055 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.055 16:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.315 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.316 { 00:18:51.316 "cntlid": 99, 00:18:51.316 "qid": 0, 00:18:51.316 "state": "enabled", 00:18:51.316 "thread": "nvmf_tgt_poll_group_000", 00:18:51.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.316 "listen_address": { 00:18:51.316 "trtype": "TCP", 00:18:51.316 "adrfam": "IPv4", 00:18:51.316 "traddr": "10.0.0.2", 00:18:51.316 "trsvcid": "4420" 00:18:51.316 }, 00:18:51.316 "peer_address": { 00:18:51.316 "trtype": "TCP", 00:18:51.316 "adrfam": "IPv4", 00:18:51.316 "traddr": "10.0.0.1", 00:18:51.316 "trsvcid": "47342" 00:18:51.316 }, 00:18:51.316 "auth": { 00:18:51.316 "state": "completed", 00:18:51.316 "digest": "sha512", 00:18:51.316 "dhgroup": "null" 00:18:51.316 } 00:18:51.316 } 00:18:51.316 ]' 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.316 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.576 16:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:51.576 16:13:27 
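[Every pass ends with the same three probes into the nvmf_subsystem_get_qpairs output, asserting that the qpair actually negotiated the combination the host was pinned to. Condensed (the script runs each probe as its own test):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
]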
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.147 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:52.406 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.666 00:18:52.666 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.666 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.666 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.927 { 00:18:52.927 "cntlid": 101, 00:18:52.927 "qid": 0, 00:18:52.927 "state": "enabled", 00:18:52.927 "thread": "nvmf_tgt_poll_group_000", 00:18:52.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:52.927 "listen_address": { 00:18:52.927 "trtype": "TCP", 00:18:52.927 "adrfam": "IPv4", 00:18:52.927 "traddr": "10.0.0.2", 00:18:52.927 "trsvcid": "4420" 00:18:52.927 }, 00:18:52.927 "peer_address": { 00:18:52.927 "trtype": "TCP", 00:18:52.927 "adrfam": "IPv4", 00:18:52.927 "traddr": "10.0.0.1", 00:18:52.927 "trsvcid": "47372" 00:18:52.927 }, 00:18:52.927 "auth": { 00:18:52.927 "state": "completed", 00:18:52.927 "digest": "sha512", 00:18:52.927 "dhgroup": "null" 00:18:52.927 } 00:18:52.927 } 00:18:52.927 ]' 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.927 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.188 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:53.188 16:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.758 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:54.018 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.019 16:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.279 00:18:54.279 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.279 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.279 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.539 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.539 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.539 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.539 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.539 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.539 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.539 { 00:18:54.539 "cntlid": 103, 00:18:54.539 "qid": 0, 00:18:54.539 "state": "enabled", 00:18:54.539 "thread": "nvmf_tgt_poll_group_000", 00:18:54.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:54.539 "listen_address": { 00:18:54.539 "trtype": "TCP", 00:18:54.539 "adrfam": "IPv4", 00:18:54.539 "traddr": "10.0.0.2", 00:18:54.539 "trsvcid": "4420" 00:18:54.540 }, 00:18:54.540 "peer_address": { 00:18:54.540 "trtype": "TCP", 00:18:54.540 "adrfam": "IPv4", 00:18:54.540 "traddr": "10.0.0.1", 00:18:54.540 "trsvcid": "47406" 00:18:54.540 }, 00:18:54.540 "auth": { 00:18:54.540 "state": "completed", 00:18:54.540 "digest": "sha512", 00:18:54.540 "dhgroup": "null" 00:18:54.540 } 00:18:54.540 } 00:18:54.540 ]' 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.540 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.800 16:13:30 
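[A note on the DHHC-1:NN:...: strings the nvme connect calls pass around: this is the NVMe in-band-authentication secret representation, where the NN field records how the base secret was transformed (00 = cleartext, 01/02/03 = hashed with SHA-256/384/512), which is why key0 appears as DHHC-1:00: and key3 as DHHC-1:03: throughout this log. nvme-cli can generate such keys; the invocation below is from memory and should be checked against `nvme gen-dhchap-key --help` on the test host:

    # illustrative only -- verify flag names for your nvme-cli version
    nvme gen-dhchap-key --hmac=3 --key-length=48 --nqn "$HOSTNQN"
]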
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:54.800 16:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:55.371 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.631 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:18:55.632 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.632 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.892 00:18:55.892 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.892 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.892 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.152 { 00:18:56.152 "cntlid": 105, 00:18:56.152 "qid": 0, 00:18:56.152 "state": "enabled", 00:18:56.152 "thread": "nvmf_tgt_poll_group_000", 00:18:56.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.152 "listen_address": { 00:18:56.152 "trtype": "TCP", 00:18:56.152 "adrfam": "IPv4", 00:18:56.152 "traddr": "10.0.0.2", 00:18:56.152 "trsvcid": "4420" 00:18:56.152 }, 00:18:56.152 "peer_address": { 00:18:56.152 "trtype": "TCP", 00:18:56.152 "adrfam": "IPv4", 00:18:56.152 "traddr": "10.0.0.1", 00:18:56.152 "trsvcid": "47440" 00:18:56.152 }, 00:18:56.152 "auth": { 00:18:56.152 "state": "completed", 00:18:56.152 "digest": "sha512", 00:18:56.152 "dhgroup": "ffdhe2048" 00:18:56.152 } 00:18:56.152 } 00:18:56.152 ]' 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.152 16:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.152 16:13:31 
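[The dhgroup dimension now under way (ffdhe2048, after the null passes) changes the protocol shape, not just a knob: with --dhchap-dhgroups null the authentication is a pure HMAC challenge-response on the configured secret, while the ffdhe groups (ffdhe2048 here, ffdhe8192 earlier) add an ephemeral finite-field Diffie-Hellman exchange so the session key is not derived from the long-lived secret alone. The host-side pin is the same one-liner either way:

    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null       # secret-only
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048  # adds DH step
]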
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.413 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:56.413 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:56.984 16:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.245 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.246 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.507 00:18:57.507 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.507 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.507 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.767 { 00:18:57.767 "cntlid": 107, 00:18:57.767 "qid": 0, 00:18:57.767 "state": "enabled", 00:18:57.767 "thread": "nvmf_tgt_poll_group_000", 00:18:57.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:57.767 "listen_address": { 00:18:57.767 "trtype": "TCP", 00:18:57.767 "adrfam": "IPv4", 00:18:57.767 "traddr": "10.0.0.2", 00:18:57.767 "trsvcid": "4420" 00:18:57.767 }, 00:18:57.767 "peer_address": { 00:18:57.767 "trtype": "TCP", 00:18:57.767 "adrfam": "IPv4", 00:18:57.767 "traddr": "10.0.0.1", 00:18:57.767 "trsvcid": "47468" 00:18:57.767 }, 00:18:57.767 "auth": { 00:18:57.767 "state": "completed", 00:18:57.767 "digest": "sha512", 00:18:57.767 "dhgroup": "ffdhe2048" 00:18:57.767 } 00:18:57.767 } 00:18:57.767 ]' 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.767 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.027 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:58.027 16:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:18:58.598 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.599 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.599 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.599 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.599 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.599 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.599 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.599 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
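[editor's note] For orientation: each `connect_authenticate` pass seen in this trace arms both ends of the DH-HMAC-CHAP handshake with the same key pair before attaching a controller. A minimal sketch of that step, using the RPC client path and NQNs exactly as logged (the `keyN`/`ckeyN` names are key handles the harness registered earlier in the run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Target side: allow the host on the subsystem with the key pair under test.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller through the host RPC socket; the attach
    # only succeeds if the bidirectional DH-HMAC-CHAP exchange completes.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2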
00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.859 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.120 00:18:59.120 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.120 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.120 16:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.409 { 00:18:59.409 "cntlid": 109, 00:18:59.409 "qid": 0, 00:18:59.409 "state": "enabled", 00:18:59.409 "thread": "nvmf_tgt_poll_group_000", 00:18:59.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:59.409 "listen_address": { 00:18:59.409 "trtype": "TCP", 00:18:59.409 "adrfam": "IPv4", 00:18:59.409 "traddr": "10.0.0.2", 00:18:59.409 "trsvcid": "4420" 00:18:59.409 }, 00:18:59.409 "peer_address": { 00:18:59.409 "trtype": "TCP", 00:18:59.409 "adrfam": "IPv4", 00:18:59.409 "traddr": "10.0.0.1", 00:18:59.409 "trsvcid": "51806" 00:18:59.409 }, 00:18:59.409 "auth": { 00:18:59.409 "state": "completed", 00:18:59.409 "digest": "sha512", 00:18:59.409 "dhgroup": "ffdhe2048" 00:18:59.409 } 00:18:59.409 } 00:18:59.409 ]' 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.409 16:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.409 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.756 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:18:59.756 16:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.349 16:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.349 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.610 00:19:00.610 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.610 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.610 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.870 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.870 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.871 { 00:19:00.871 "cntlid": 111, 00:19:00.871 "qid": 0, 00:19:00.871 "state": "enabled", 00:19:00.871 "thread": "nvmf_tgt_poll_group_000", 00:19:00.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.871 "listen_address": { 00:19:00.871 "trtype": "TCP", 00:19:00.871 "adrfam": "IPv4", 00:19:00.871 "traddr": "10.0.0.2", 00:19:00.871 "trsvcid": "4420" 00:19:00.871 }, 00:19:00.871 "peer_address": { 00:19:00.871 "trtype": "TCP", 00:19:00.871 "adrfam": "IPv4", 00:19:00.871 "traddr": "10.0.0.1", 00:19:00.871 "trsvcid": "51840" 00:19:00.871 }, 00:19:00.871 "auth": { 00:19:00.871 "state": "completed", 00:19:00.871 "digest": "sha512", 00:19:00.871 "dhgroup": "ffdhe2048" 00:19:00.871 } 00:19:00.871 } 00:19:00.871 ]' 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.871 
16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.871 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.131 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.131 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.131 16:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.131 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:01.131 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:01.704 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.964 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.964 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.964 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.964 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.964 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.964 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.965 16:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.225 00:19:02.225 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.225 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.225 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.487 { 00:19:02.487 "cntlid": 113, 00:19:02.487 "qid": 0, 00:19:02.487 "state": "enabled", 00:19:02.487 "thread": "nvmf_tgt_poll_group_000", 00:19:02.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.487 "listen_address": { 00:19:02.487 "trtype": "TCP", 00:19:02.487 "adrfam": "IPv4", 00:19:02.487 "traddr": "10.0.0.2", 00:19:02.487 "trsvcid": "4420" 00:19:02.487 }, 00:19:02.487 "peer_address": { 00:19:02.487 "trtype": "TCP", 00:19:02.487 "adrfam": "IPv4", 00:19:02.487 "traddr": "10.0.0.1", 00:19:02.487 "trsvcid": "51860" 00:19:02.487 }, 00:19:02.487 "auth": { 00:19:02.487 "state": "completed", 00:19:02.487 "digest": "sha512", 00:19:02.487 "dhgroup": "ffdhe3072" 00:19:02.487 } 00:19:02.487 } 00:19:02.487 ]' 00:19:02.487 16:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.487 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.750 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.750 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.750 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.750 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:19:02.750 16:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.693 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.954 00:19:03.954 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.954 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.954 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.215 { 00:19:04.215 "cntlid": 115, 00:19:04.215 "qid": 0, 00:19:04.215 "state": "enabled", 00:19:04.215 "thread": "nvmf_tgt_poll_group_000", 00:19:04.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:04.215 "listen_address": { 00:19:04.215 "trtype": "TCP", 00:19:04.215 "adrfam": "IPv4", 00:19:04.215 "traddr": "10.0.0.2", 00:19:04.215 "trsvcid": "4420" 00:19:04.215 }, 00:19:04.215 "peer_address": { 00:19:04.215 "trtype": "TCP", 00:19:04.215 "adrfam": "IPv4", 
00:19:04.215 "traddr": "10.0.0.1", 00:19:04.215 "trsvcid": "51892" 00:19:04.215 }, 00:19:04.215 "auth": { 00:19:04.215 "state": "completed", 00:19:04.215 "digest": "sha512", 00:19:04.215 "dhgroup": "ffdhe3072" 00:19:04.215 } 00:19:04.215 } 00:19:04.215 ]' 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.215 16:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.215 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.215 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.215 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.215 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.215 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.476 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:19:04.476 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:19:05.049 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.049 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.050 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.050 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.050 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.050 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.050 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.050 16:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.312 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.572 00:19:05.572 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.572 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.572 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.833 { 00:19:05.833 "cntlid": 117, 00:19:05.833 "qid": 0, 00:19:05.833 "state": "enabled", 00:19:05.833 "thread": "nvmf_tgt_poll_group_000", 00:19:05.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.833 "listen_address": { 00:19:05.833 "trtype": "TCP", 
00:19:05.833 "adrfam": "IPv4", 00:19:05.833 "traddr": "10.0.0.2", 00:19:05.833 "trsvcid": "4420" 00:19:05.833 }, 00:19:05.833 "peer_address": { 00:19:05.833 "trtype": "TCP", 00:19:05.833 "adrfam": "IPv4", 00:19:05.833 "traddr": "10.0.0.1", 00:19:05.833 "trsvcid": "51924" 00:19:05.833 }, 00:19:05.833 "auth": { 00:19:05.833 "state": "completed", 00:19:05.833 "digest": "sha512", 00:19:05.833 "dhgroup": "ffdhe3072" 00:19:05.833 } 00:19:05.833 } 00:19:05.833 ]' 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.833 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.094 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:19:06.094 16:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.664 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.925 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.186 00:19:07.186 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.186 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.186 16:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.447 { 00:19:07.447 "cntlid": 119, 00:19:07.447 "qid": 0, 00:19:07.447 "state": "enabled", 00:19:07.447 "thread": "nvmf_tgt_poll_group_000", 00:19:07.447 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.447 "listen_address": { 00:19:07.447 "trtype": "TCP", 00:19:07.447 "adrfam": "IPv4", 00:19:07.447 "traddr": "10.0.0.2", 00:19:07.447 "trsvcid": "4420" 00:19:07.447 }, 00:19:07.447 "peer_address": { 00:19:07.447 "trtype": "TCP", 00:19:07.447 "adrfam": "IPv4", 00:19:07.447 "traddr": "10.0.0.1", 00:19:07.447 "trsvcid": "51960" 00:19:07.447 }, 00:19:07.447 "auth": { 00:19:07.447 "state": "completed", 00:19:07.447 "digest": "sha512", 00:19:07.447 "dhgroup": "ffdhe3072" 00:19:07.447 } 00:19:07.447 } 00:19:07.447 ]' 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.447 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.707 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:07.707 16:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.278 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.278 16:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.539 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.799 00:19:08.799 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.799 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.799 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.059 16:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.059 { 00:19:09.059 "cntlid": 121, 00:19:09.059 "qid": 0, 00:19:09.059 "state": "enabled", 00:19:09.059 "thread": "nvmf_tgt_poll_group_000", 00:19:09.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.059 "listen_address": { 00:19:09.059 "trtype": "TCP", 00:19:09.059 "adrfam": "IPv4", 00:19:09.059 "traddr": "10.0.0.2", 00:19:09.059 "trsvcid": "4420" 00:19:09.059 }, 00:19:09.059 "peer_address": { 00:19:09.059 "trtype": "TCP", 00:19:09.059 "adrfam": "IPv4", 00:19:09.059 "traddr": "10.0.0.1", 00:19:09.059 "trsvcid": "51978" 00:19:09.059 }, 00:19:09.059 "auth": { 00:19:09.059 "state": "completed", 00:19:09.059 "digest": "sha512", 00:19:09.059 "dhgroup": "ffdhe4096" 00:19:09.059 } 00:19:09.059 } 00:19:09.059 ]' 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.059 16:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.320 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:19:09.320 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
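[editor's note] Each pass then proves the negotiated parameters rather than trusting the attach alone: it pulls the live qpair description from the target and asserts on its `auth` object, which is exactly what the three jq filters at target/auth.sh@75-@77 above are doing. Roughly, assuming the harness captures the RPC output in `$qpairs` the way @74 does:

    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished

An `auth.state` of "completed" distinguishes a fully authenticated queue pair from one that connected while authentication was still in flight.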
00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:09.890 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:10.150 16:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:10.411
00:19:10.411 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:10.411 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:10.411 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.671 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:10.671 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:10.671 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.671 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.671 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.671 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:10.671 {
00:19:10.671 "cntlid": 123,
00:19:10.671 "qid": 0,
00:19:10.671 "state": "enabled",
00:19:10.671 "thread": "nvmf_tgt_poll_group_000",
00:19:10.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:10.671 "listen_address": {
00:19:10.671 "trtype": "TCP",
00:19:10.672 "adrfam": "IPv4",
00:19:10.672 "traddr": "10.0.0.2",
00:19:10.672 "trsvcid": "4420"
00:19:10.672 },
00:19:10.672 "peer_address": {
00:19:10.672 "trtype": "TCP",
00:19:10.672 "adrfam": "IPv4",
00:19:10.672 "traddr": "10.0.0.1",
00:19:10.672 "trsvcid": "34306"
00:19:10.672 },
00:19:10.672 "auth": {
00:19:10.672 "state": "completed",
00:19:10.672 "digest": "sha512",
00:19:10.672 "dhgroup": "ffdhe4096"
00:19:10.672 }
00:19:10.672 }
00:19:10.672 ]'
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:10.932 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==:
00:19:10.932 16:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==:
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:11.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
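The bdev_nvme_set_options call at the top of each pass is what pins the negotiation: the host offers exactly one digest and one DH group, so a successful attach proves that specific combination. The same restriction in isolation, a sketch using the socket path from this run:

# Host side: offer only SHA-512 + ffdhe4096 during DH-HMAC-CHAP
# negotiation; any mismatch with the target fails the later attach.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096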
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:11.504 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:11.765 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:12.026
00:19:12.026 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:12.026 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:12.026 16:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:12.287 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:12.287 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:12.287 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:12.287 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.287 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.287 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:12.287 {
00:19:12.287 "cntlid": 125,
00:19:12.287 "qid": 0,
00:19:12.287 "state": "enabled",
00:19:12.287 "thread": "nvmf_tgt_poll_group_000",
00:19:12.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:12.287 "listen_address": {
00:19:12.287 "trtype": "TCP",
00:19:12.287 "adrfam": "IPv4",
00:19:12.287 "traddr": "10.0.0.2",
00:19:12.287 "trsvcid": "4420"
00:19:12.287 },
00:19:12.287 "peer_address": {
00:19:12.287 "trtype": "TCP",
00:19:12.287 "adrfam": "IPv4",
00:19:12.287 "traddr": "10.0.0.1",
00:19:12.287 "trsvcid": "34336"
00:19:12.287 },
00:19:12.287 "auth": {
00:19:12.287 "state": "completed",
00:19:12.287 "digest": "sha512",
00:19:12.287 "dhgroup": "ffdhe4096"
00:19:12.287 }
00:19:12.287 }
00:19:12.287 ]'
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:12.547 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11:
00:19:12.547 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11:
00:19:13.118 16:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:13.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:13.118 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:13.118 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.119 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.119 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
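On the target side, each pass admits the host NQN with a fresh pair of DH-HMAC-CHAP keys; supplying --dhchap-ctrlr-key as well asks the host to verify the controller back (bidirectional authentication). A sketch, with key names assumed to be the keyring entries registered earlier in this test run and rpc_cmd being the suite's target-side helper:

# key2/ckey2 are keyring names set up earlier in the run; the second
# option turns on bidirectional (controller-to-host) authentication.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2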
00:19:13.119 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:13.119 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:13.119 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:13.380 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:13.640
00:19:13.640 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:13.640 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:13.640 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:13.900 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:13.900 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:13.900 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.900 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.900 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:13.900 16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:13.900 {
00:19:13.900 "cntlid": 127,
00:19:13.900 "qid": 0,
00:19:13.900 "state": "enabled",
00:19:13.900 "thread": "nvmf_tgt_poll_group_000",
00:19:13.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:13.900 "listen_address": {
00:19:13.900 "trtype": "TCP",
00:19:13.900 "adrfam": "IPv4",
00:19:13.900 "traddr": "10.0.0.2",
00:19:13.900 "trsvcid": "4420"
00:19:13.900 },
00:19:13.900 "peer_address": {
00:19:13.900 "trtype": "TCP",
00:19:13.900 "adrfam": "IPv4",
00:19:13.900 "traddr": "10.0.0.1",
00:19:13.900 "trsvcid": "34358"
00:19:13.900 },
00:19:13.900 "auth": {
00:19:13.900 "state": "completed",
00:19:13.900 "digest": "sha512",
00:19:13.900 "dhgroup": "ffdhe4096"
00:19:13.900 }
00:19:13.900 }
00:19:13.900 ]'
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
16:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.161 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=:
00:19:14.161 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=:
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:14.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
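Note that the key3 pass above carries no controller key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line expands to nothing when ckeys[3] is unset, so that pass authenticates the host only (unidirectional). The bash pattern in isolation, a sketch with illustrative variable names:

# Bash ':+' expansion: emit the option only when a controller key
# exists for this key id, toggling bidirectional auth per pass.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"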
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:14.733 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.995 16:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:15.256
00:19:15.256 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:15.256 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:15.256 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:15.517 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:15.517 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:15.517 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.517 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.517 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.517 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:15.517 {
00:19:15.517 "cntlid": 129,
00:19:15.517 "qid": 0,
00:19:15.517 "state": "enabled",
00:19:15.517 "thread": "nvmf_tgt_poll_group_000",
00:19:15.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:15.517 "listen_address": {
00:19:15.517 "trtype": "TCP",
00:19:15.517 "adrfam": "IPv4",
00:19:15.517 "traddr": "10.0.0.2",
00:19:15.517 "trsvcid": "4420"
00:19:15.517 },
00:19:15.517 "peer_address": {
00:19:15.517 "trtype": "TCP",
00:19:15.517 "adrfam": "IPv4",
00:19:15.517 "traddr": "10.0.0.1",
00:19:15.517 "trsvcid": "34388"
00:19:15.517 },
00:19:15.517 "auth": {
00:19:15.517 "state": "completed",
00:19:15.517 "digest": "sha512",
00:19:15.517 "dhgroup": "ffdhe6144"
00:19:15.517 }
00:19:15.517 }
00:19:15.517 ]'
16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:15.779 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:15.779 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:15.779 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:15.779 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:15.779 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=:
00:19:15.779 16:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=:
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:16.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
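At this point the outer loop has advanced from ffdhe4096 to ffdhe6144 and all four key ids are replayed against the new group. For the sha512 slice visible in this excerpt, the sweep has roughly this shape; this is a reconstruction from the xtrace, not the script source:

# Sweep every DH group against every configured key id (sketch).
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                                      --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done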
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:16.721 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.722 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.981
00:19:16.981 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:16.981 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:16.982 16:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:17.242 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:17.242 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:17.242 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.242 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.242 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.242 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:17.242 {
00:19:17.242 "cntlid": 131,
00:19:17.242 "qid": 0,
00:19:17.242 "state": "enabled",
00:19:17.242 "thread": "nvmf_tgt_poll_group_000",
00:19:17.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:17.242 "listen_address": {
00:19:17.242 "trtype": "TCP",
00:19:17.242 "adrfam": "IPv4",
00:19:17.242 "traddr": "10.0.0.2",
00:19:17.242 "trsvcid": "4420"
00:19:17.242 },
00:19:17.242 "peer_address": {
00:19:17.242 "trtype": "TCP",
00:19:17.242 "adrfam": "IPv4",
00:19:17.242 "traddr": "10.0.0.1",
00:19:17.242 "trsvcid": "34404"
00:19:17.242 },
00:19:17.242 "auth": {
00:19:17.242 "state": "completed",
00:19:17.242 "digest": "sha512",
00:19:17.242 "dhgroup": "ffdhe6144"
00:19:17.242 }
00:19:17.242 }
00:19:17.242 ]'
16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:17.503 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:17.503 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:17.503 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:17.503 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:17.503 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==:
00:19:17.503 16:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==:
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:18.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
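The bdev_nvme_attach_controller RPC is where the DH-HMAC-CHAP exchange actually runs: the controller only comes up, and get_controllers only reports nvme0, if the handshake succeeds. The host-side attach in isolation, a sketch using the addresses and socket from this run:

# Authenticated attach: fails, creating no nvme0 controller, if the
# DH-HMAC-CHAP handshake does not complete.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1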
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.444 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.704
00:19:18.704 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:18.704 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:18.704 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:18.965 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:18.965 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:18.965 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.965 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.965 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.965 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:18.965 {
00:19:18.965 "cntlid": 133,
00:19:18.965 "qid": 0,
00:19:18.965 "state": "enabled",
00:19:18.965 "thread": "nvmf_tgt_poll_group_000",
00:19:18.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:18.965 "listen_address": {
00:19:18.965 "trtype": "TCP",
00:19:18.965 "adrfam": "IPv4",
00:19:18.965 "traddr": "10.0.0.2",
00:19:18.966 "trsvcid": "4420"
00:19:18.966 },
00:19:18.966 "peer_address": {
00:19:18.966 "trtype": "TCP",
00:19:18.966 "adrfam": "IPv4",
00:19:18.966 "traddr": "10.0.0.1",
00:19:18.966 "trsvcid": "34436"
00:19:18.966 },
00:19:18.966 "auth": {
00:19:18.966 "state": "completed",
00:19:18.966 "digest": "sha512",
00:19:18.966 "dhgroup": "ffdhe6144"
00:19:18.966 }
00:19:18.966 }
00:19:18.966 ]'
16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:19.226 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:19.226 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:19.226 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:19.226 16:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:19.226 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11:
00:19:19.226 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11:
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:20.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
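The nvme connect calls carry the same key material in nvme-cli's DHHC-1 wire format, DHHC-1:NN:base64:, where, as I read it, NN=00 is an untransformed secret and 01/02/03 mark SHA-256/384/512-transformed variants. A kernel-initiator sketch equivalent to the attach above, with the secrets shortened and the host identity abbreviated to variables:

# Kernel host path; secrets elided here for readability.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:02:YWEyNGJl...' \
    --dhchap-ctrl-secret 'DHHC-1:01:ODYwYzEy...'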
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:20.168 16:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:20.434
00:19:20.434 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:20.434 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:20.434 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:20.694 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:20.694 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:20.694 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.694 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.694 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.694 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:20.694 {
00:19:20.694 "cntlid": 135,
00:19:20.694 "qid": 0,
00:19:20.694 "state": "enabled",
00:19:20.694 "thread": "nvmf_tgt_poll_group_000",
00:19:20.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:20.694 "listen_address": {
00:19:20.694 "trtype": "TCP",
00:19:20.694 "adrfam": "IPv4",
00:19:20.694 "traddr": "10.0.0.2",
00:19:20.694 "trsvcid": "4420"
00:19:20.694 },
00:19:20.694 "peer_address": {
00:19:20.694 "trtype": "TCP",
00:19:20.694 "adrfam": "IPv4",
00:19:20.694 "traddr": "10.0.0.1",
00:19:20.694 "trsvcid": "50464"
00:19:20.694 },
00:19:20.694 "auth": {
00:19:20.694 "state": "completed",
00:19:20.694 "digest": "sha512",
00:19:20.694 "dhgroup": "ffdhe6144"
00:19:20.694 }
00:19:20.694 }
00:19:20.694 ]'
16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:20.954 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:20.954 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.954 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.954 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:20.954 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=:
00:19:20.955 16:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=:
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:21.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
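Each pass in this excerpt is one call to the suite's connect_authenticate helper. Reconstructed from the xtrace alone (the flow below is my reading, not the script source), it amounts to:

# Rough reconstruction from the xtrace: add the host with keys, attach,
# verify the qpair's auth block, then detach. $1=digest $2=dhgroup $3=keyid
connect_authenticate() {
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}
    bdev_connect -b nvme0 --dhchap-key "key$3" \
        ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"   # then the jq auth checks
    hostrpc bdev_nvme_detach_controller nvme0
}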
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.894 16:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:22.464
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.464 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:22.464 {
00:19:22.464 "cntlid": 137,
00:19:22.464 "qid": 0,
00:19:22.464 "state": "enabled",
00:19:22.464 "thread": "nvmf_tgt_poll_group_000",
00:19:22.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:22.464 "listen_address": {
00:19:22.464 "trtype": "TCP",
00:19:22.464 "adrfam": "IPv4",
00:19:22.464 "traddr": "10.0.0.2",
00:19:22.464 "trsvcid": "4420"
00:19:22.464 },
00:19:22.464 "peer_address": {
00:19:22.464 "trtype": "TCP",
00:19:22.464 "adrfam": "IPv4",
00:19:22.464 "traddr": "10.0.0.1",
00:19:22.464 "trsvcid": "50492"
00:19:22.464 },
00:19:22.464 "auth": {
00:19:22.464 "state": "completed",
00:19:22.464 "digest": "sha512",
00:19:22.464 "dhgroup": "ffdhe8192"
00:19:22.464 }
00:19:22.464 }
00:19:22.464 ]'
16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:22.725 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:22.725 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:22.725 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:22.725 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:22.725 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
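The sweep has now reached ffdhe8192, the largest group in the ffdhe series used here; the only externally visible difference is the qpair's reported DH group, which the usual probe confirms. A sketch of that single check:

# Expect the freshly authenticated qpair to report the larger group.
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.dhgroup'   # ffdhe8192 on this pass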
00:19:22.725 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=:
00:19:22.726 16:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=:
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:23.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.667 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.237
00:19:24.237 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:24.237 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:24.237 16:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:24.237 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.237 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:24.237 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.237 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.237 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.237 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:24.237 {
00:19:24.237 "cntlid": 139,
00:19:24.237 "qid": 0,
00:19:24.237 "state": "enabled",
00:19:24.237 "thread": "nvmf_tgt_poll_group_000",
00:19:24.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:24.237 "listen_address": {
00:19:24.237 "trtype": "TCP",
00:19:24.237 "adrfam": "IPv4",
00:19:24.237 "traddr": "10.0.0.2",
00:19:24.237 "trsvcid": "4420"
00:19:24.237 },
00:19:24.237 "peer_address": {
00:19:24.237 "trtype": "TCP",
00:19:24.237 "adrfam": "IPv4",
00:19:24.237 "traddr": "10.0.0.1",
00:19:24.237 "trsvcid": "50514"
00:19:24.237 },
00:19:24.237 "auth": {
00:19:24.237 "state": "completed",
00:19:24.237 "digest": "sha512",
00:19:24.237 "dhgroup": "ffdhe8192"
00:19:24.237 }
00:19:24.237 }
00:19:24.237 ]'
00:19:24.498 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:24.498 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:24.498 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:24.498 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:24.498 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
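Every pass ends with the same symmetric teardown, so the next digest/dhgroup/key combination starts from a clean slate; as a sketch, using the suite's helpers shown above and the host NQN abbreviated to a variable:

# Tear down both host paths and revoke the host before the next pass.
hostrpc bdev_nvme_detach_controller nvme0         # SPDK host path
nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # kernel host path
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"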
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.498 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.498 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.759 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:19:24.759 16:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: --dhchap-ctrl-secret DHHC-1:02:NmVmNzc3Zjk0ZjkyOWY1MDNjMjFmYjBkMzY3ZmVmZWQyZWMwYjk3NGQxNmE0Njg4jfeVIg==: 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.330 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.590 16:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.590 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.591 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.591 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.591 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.161 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.161 16:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.161 { 00:19:26.161 "cntlid": 141, 00:19:26.161 "qid": 0, 00:19:26.161 "state": "enabled", 00:19:26.161 "thread": "nvmf_tgt_poll_group_000", 00:19:26.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.161 "listen_address": { 00:19:26.161 "trtype": "TCP", 00:19:26.161 "adrfam": "IPv4", 00:19:26.161 "traddr": "10.0.0.2", 00:19:26.161 "trsvcid": "4420" 00:19:26.161 }, 00:19:26.161 "peer_address": { 00:19:26.161 "trtype": "TCP", 00:19:26.161 "adrfam": "IPv4", 00:19:26.161 "traddr": "10.0.0.1", 00:19:26.161 "trsvcid": "50546" 00:19:26.161 }, 00:19:26.161 "auth": { 00:19:26.161 "state": "completed", 00:19:26.161 "digest": "sha512", 00:19:26.161 "dhgroup": "ffdhe8192" 00:19:26.161 } 00:19:26.161 } 00:19:26.161 ]' 00:19:26.161 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.161 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.161 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.161 16:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.161 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.422 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.422 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.422 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.422 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:19:26.422 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:01:ODYwYzEyMzUyNzJhNjVhZWFkZDNjOGNhNGUyYTUyZWYiLH11: 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.365 16:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.365 16:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.365 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.937 00:19:27.937 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.937 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.937 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.937 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.937 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.937 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.938 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.938 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.938 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.938 { 00:19:27.938 "cntlid": 143, 00:19:27.938 "qid": 0, 00:19:27.938 "state": "enabled", 00:19:27.938 "thread": "nvmf_tgt_poll_group_000", 00:19:27.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:27.938 "listen_address": { 00:19:27.938 "trtype": "TCP", 00:19:27.938 "adrfam": "IPv4", 00:19:27.938 "traddr": "10.0.0.2", 00:19:27.938 "trsvcid": "4420" 00:19:27.938 }, 00:19:27.938 "peer_address": { 00:19:27.938 "trtype": "TCP", 00:19:27.938 "adrfam": "IPv4", 00:19:27.938 "traddr": "10.0.0.1", 00:19:27.938 "trsvcid": "50570" 00:19:27.938 }, 00:19:27.938 "auth": { 00:19:27.938 "state": "completed", 00:19:27.938 "digest": "sha512", 00:19:27.938 "dhgroup": "ffdhe8192" 00:19:27.938 } 00:19:27.938 } 00:19:27.938 ]' 00:19:27.938 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.938 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.938 
16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.198 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.198 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.198 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.198 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.198 16:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.198 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:28.198 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.140 16:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.140 16:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.711 00:19:29.711 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.711 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.711 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.971 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.971 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.971 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.971 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.971 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.971 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.971 { 00:19:29.971 "cntlid": 145, 00:19:29.971 "qid": 0, 00:19:29.971 "state": "enabled", 00:19:29.971 "thread": "nvmf_tgt_poll_group_000", 00:19:29.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:29.971 "listen_address": { 00:19:29.971 "trtype": "TCP", 00:19:29.971 "adrfam": "IPv4", 00:19:29.971 "traddr": "10.0.0.2", 00:19:29.971 "trsvcid": "4420" 00:19:29.971 }, 00:19:29.971 "peer_address": { 00:19:29.971 
"trtype": "TCP", 00:19:29.971 "adrfam": "IPv4", 00:19:29.971 "traddr": "10.0.0.1", 00:19:29.971 "trsvcid": "49732" 00:19:29.971 }, 00:19:29.971 "auth": { 00:19:29.971 "state": "completed", 00:19:29.972 "digest": "sha512", 00:19:29.972 "dhgroup": "ffdhe8192" 00:19:29.972 } 00:19:29.972 } 00:19:29.972 ]' 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.972 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.232 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:19:30.232 16:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==: --dhchap-ctrl-secret DHHC-1:03:MTUyMmQxODVlZTQxZGQ5ZTA2MDc0YmI1MWExMzdjNDhjYzE5NTkyYWU4YjNlMzMxNzU1MmE2YTFmMzQyNWU3N+HvbbQ=: 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:30.803 16:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:31.375 request: 00:19:31.375 { 00:19:31.375 "name": "nvme0", 00:19:31.375 "trtype": "tcp", 00:19:31.375 "traddr": "10.0.0.2", 00:19:31.375 "adrfam": "ipv4", 00:19:31.375 "trsvcid": "4420", 00:19:31.375 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:31.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.375 "prchk_reftag": false, 00:19:31.375 "prchk_guard": false, 00:19:31.375 "hdgst": false, 00:19:31.375 "ddgst": false, 00:19:31.375 "dhchap_key": "key2", 00:19:31.375 "allow_unrecognized_csi": false, 00:19:31.375 "method": "bdev_nvme_attach_controller", 00:19:31.375 "req_id": 1 00:19:31.375 } 00:19:31.375 Got JSON-RPC error response 00:19:31.375 response: 00:19:31.375 { 00:19:31.375 "code": -5, 00:19:31.375 "message": "Input/output error" 00:19:31.375 } 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.375 16:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.375 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:31.636 request: 00:19:31.636 { 00:19:31.636 "name": "nvme0", 00:19:31.636 "trtype": "tcp", 00:19:31.636 "traddr": "10.0.0.2", 00:19:31.636 "adrfam": "ipv4", 00:19:31.637 "trsvcid": "4420", 00:19:31.637 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:31.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.637 "prchk_reftag": false, 00:19:31.637 "prchk_guard": false, 00:19:31.637 "hdgst": false, 00:19:31.637 "ddgst": false, 00:19:31.637 "dhchap_key": "key1", 00:19:31.637 "dhchap_ctrlr_key": "ckey2", 00:19:31.637 "allow_unrecognized_csi": false, 00:19:31.637 "method": "bdev_nvme_attach_controller", 00:19:31.637 "req_id": 1 00:19:31.637 } 00:19:31.637 Got JSON-RPC error response 00:19:31.637 response: 00:19:31.637 { 00:19:31.637 "code": -5, 00:19:31.637 "message": "Input/output error" 00:19:31.637 } 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:31.898 16:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.898 16:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.159 request: 00:19:32.159 { 00:19:32.159 "name": "nvme0", 00:19:32.159 "trtype": "tcp", 00:19:32.159 "traddr": "10.0.0.2", 00:19:32.159 "adrfam": "ipv4", 00:19:32.159 "trsvcid": "4420", 00:19:32.159 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:32.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:32.159 "prchk_reftag": false, 00:19:32.159 "prchk_guard": false, 00:19:32.159 "hdgst": false, 00:19:32.159 "ddgst": false, 00:19:32.159 "dhchap_key": "key1", 00:19:32.159 "dhchap_ctrlr_key": "ckey1", 00:19:32.159 "allow_unrecognized_csi": false, 00:19:32.159 "method": "bdev_nvme_attach_controller", 00:19:32.159 "req_id": 1 00:19:32.159 } 00:19:32.159 Got JSON-RPC error response 00:19:32.159 response: 00:19:32.159 { 00:19:32.159 "code": -5, 00:19:32.159 "message": "Input/output error" 00:19:32.159 } 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1253809 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1253809 ']' 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1253809 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.159 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1253809 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1253809' 00:19:32.419 killing process with pid 1253809 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1253809 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1253809 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1280029 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1280029 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1280029 ']' 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.419 16:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1280029 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1280029 ']' 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
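
The fresh target (pid 1280029) comes up with --wait-for-rpc, and the @174-@176 loop that follows re-registers each DHCHAP secret as a named keyring entry before authentication is exercised again. A minimal sketch of one iteration, assuming each /tmp/spdk.key-* file holds a single DHHC-1 secret string of the form used by the nvme connect calls above:

# Sketch: one pass of the @174-@176 key-registration loop.
# Assumption: the key file content is one "DHHC-1:<hash>:<base64>:" string,
# e.g. the DHHC-1:00 secret from the earlier connects.
printf 'DHHC-1:00:MzZlODE0OWY3YWYyNzlhOTYzZmFkOGU4Y2VkOWI5MmY0Y2M1OTY1ZmI2MDQ4ZTZm5JJJeQ==:' \
    > /tmp/spdk.key-null.gbN
# rpc_cmd is the harness wrapper that drives the target app's rpc.py socket.
rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gbN     # host key
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.alZ  # controller (bidirectional) key

Registering the secrets as keyring entries is what lets the later nvmf_subsystem_add_host and bdev_nvme_attach_controller calls refer to them simply as key0/ckey0 rather than carrying raw secrets on every RPC.
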
00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.360 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 null0 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gbN 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.alZ ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.alZ 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uH0 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.yOQ ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yOQ 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:33.620 16:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Hjy 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.GHm ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GHm 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hIZ 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
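
Each @60 bdev_connect step, like the one just above, is immediately followed by its @31 expansion: hostrpc is a thin wrapper that sends the same RPC to the separate host application's socket (/var/tmp/host.sock) rather than the target's. A plausible reconstruction of the helper from those trace pairs, where $rootdir stands for the spdk checkout seen in the expanded paths:

# Sketch of auth.sh's hostrpc helper as implied by the @31 trace lines;
# it simply forwards its arguments to rpc.py against the host app's socket.
hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
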
00:19:33.620 16:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.562 nvme0n1 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.562 { 00:19:34.562 "cntlid": 1, 00:19:34.562 "qid": 0, 00:19:34.562 "state": "enabled", 00:19:34.562 "thread": "nvmf_tgt_poll_group_000", 00:19:34.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:34.562 "listen_address": { 00:19:34.562 "trtype": "TCP", 00:19:34.562 "adrfam": "IPv4", 00:19:34.562 "traddr": "10.0.0.2", 00:19:34.562 "trsvcid": "4420" 00:19:34.562 }, 00:19:34.562 "peer_address": { 00:19:34.562 "trtype": "TCP", 00:19:34.562 "adrfam": "IPv4", 00:19:34.562 "traddr": "10.0.0.1", 00:19:34.562 "trsvcid": "49784" 00:19:34.562 }, 00:19:34.562 "auth": { 00:19:34.562 "state": "completed", 00:19:34.562 "digest": "sha512", 00:19:34.562 "dhgroup": "ffdhe8192" 00:19:34.562 } 00:19:34.562 } 00:19:34.562 ]' 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.562 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.823 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.823 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.823 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.823 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.823 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.823 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:34.823 16:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:35.791 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.791 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.791 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.792 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.053 request: 00:19:36.053 { 00:19:36.053 "name": "nvme0", 00:19:36.053 "trtype": "tcp", 00:19:36.053 "traddr": "10.0.0.2", 00:19:36.053 "adrfam": "ipv4", 00:19:36.053 "trsvcid": "4420", 00:19:36.053 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:36.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.053 "prchk_reftag": false, 00:19:36.053 "prchk_guard": false, 00:19:36.053 "hdgst": false, 00:19:36.053 "ddgst": false, 00:19:36.053 "dhchap_key": "key3", 00:19:36.053 "allow_unrecognized_csi": false, 00:19:36.053 "method": "bdev_nvme_attach_controller", 00:19:36.053 "req_id": 1 00:19:36.053 } 00:19:36.053 Got JSON-RPC error response 00:19:36.053 response: 00:19:36.053 { 00:19:36.053 "code": -5, 00:19:36.053 "message": "Input/output error" 00:19:36.053 } 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.053 16:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.315 request: 00:19:36.315 { 00:19:36.315 "name": "nvme0", 00:19:36.315 "trtype": "tcp", 00:19:36.315 "traddr": "10.0.0.2", 00:19:36.315 "adrfam": "ipv4", 00:19:36.315 "trsvcid": "4420", 00:19:36.315 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:36.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.315 "prchk_reftag": false, 00:19:36.315 "prchk_guard": false, 00:19:36.315 "hdgst": false, 00:19:36.315 "ddgst": false, 00:19:36.315 "dhchap_key": "key3", 00:19:36.315 "allow_unrecognized_csi": false, 00:19:36.315 "method": "bdev_nvme_attach_controller", 00:19:36.315 "req_id": 1 00:19:36.315 } 00:19:36.315 Got JSON-RPC error response 00:19:36.315 response: 00:19:36.315 { 00:19:36.315 "code": -5, 00:19:36.315 "message": "Input/output error" 00:19:36.315 } 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:36.315 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:36.576 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:36.837 request: 00:19:36.837 { 00:19:36.837 "name": "nvme0", 00:19:36.837 "trtype": "tcp", 00:19:36.837 "traddr": "10.0.0.2", 00:19:36.837 "adrfam": "ipv4", 00:19:36.837 "trsvcid": "4420", 00:19:36.837 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:36.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.837 "prchk_reftag": false, 00:19:36.837 "prchk_guard": false, 00:19:36.837 "hdgst": false, 00:19:36.837 "ddgst": false, 00:19:36.837 "dhchap_key": "key0", 00:19:36.837 "dhchap_ctrlr_key": "key1", 00:19:36.837 "allow_unrecognized_csi": false, 00:19:36.837 "method": "bdev_nvme_attach_controller", 00:19:36.837 "req_id": 1 00:19:36.837 } 00:19:36.837 Got JSON-RPC error response 00:19:36.837 response: 00:19:36.837 { 00:19:36.837 "code": -5, 00:19:36.837 "message": "Input/output error" 00:19:36.837 } 00:19:36.837 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:36.837 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.838 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.838 16:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.838 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:36.838 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:36.838 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:37.098 nvme0n1 00:19:37.098 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:37.098 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:37.098 16:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.358 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.358 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.358 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.618 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:37.618 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.618 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.618 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.618 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:37.618 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:37.618 16:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:38.188 nvme0n1 00:19:38.188 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:38.188 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:38.188 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.449 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:38.709 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.709 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:38.709 16:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: --dhchap-ctrl-secret DHHC-1:03:YTMzN2M2NjIzZTM0Y2E1YWM4MzJmZTMwMjE0ZjlhMWU4MjliYTZmOGUzMDc2YzE3NGMzZGJkZTNmMDcyMzc0Mn/QzCg=: 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.281 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:39.597 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:39.910 request: 00:19:39.910 { 00:19:39.910 "name": "nvme0", 00:19:39.910 "trtype": "tcp", 00:19:39.910 "traddr": "10.0.0.2", 00:19:39.910 "adrfam": "ipv4", 00:19:39.910 "trsvcid": "4420", 00:19:39.910 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.910 "prchk_reftag": false, 00:19:39.910 "prchk_guard": false, 00:19:39.910 "hdgst": false, 00:19:39.910 "ddgst": false, 00:19:39.910 "dhchap_key": "key1", 00:19:39.910 "allow_unrecognized_csi": false, 00:19:39.910 "method": "bdev_nvme_attach_controller", 00:19:39.910 "req_id": 1 00:19:39.910 } 00:19:39.910 Got JSON-RPC error response 00:19:39.910 response: 00:19:39.910 { 00:19:39.910 "code": -5, 00:19:39.910 "message": "Input/output error" 00:19:39.910 } 00:19:39.910 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:39.911 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.911 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.911 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.911 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:39.911 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:39.911 16:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.511 nvme0n1 00:19:40.772 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:40.772 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:40.772 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.772 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.772 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.772 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.032 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.032 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.032 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.032 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.032 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:41.032 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:41.032 16:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:41.293 nvme0n1 00:19:41.293 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:41.293 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:41.293 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: '' 2s 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: ]] 00:19:41.553 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDAxYmQ3MTY0NWNjYWQwN2Q1NDBjMDYxZDJlYTgyMzZ+y9Kq: 00:19:41.814 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:41.814 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:41.814 16:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: 2s 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: ]] 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWEyNGJlZGQ4ODQ2MjBhMWFiYjUwMWU5NTQ0MDQxNzRhYTUzNGE5M2JkNzBjNjJmcTLblg==: 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:43.726 16:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:45.636 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.898 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.898 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.898 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.898 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.898 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:45.898 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:45.898 16:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:46.469 nvme0n1 00:19:46.469 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:46.469 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.469 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.469 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.469 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:46.469 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:47.041 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:47.041 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:47.042 16:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:47.303 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:47.564 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:48.136 request: 00:19:48.136 { 00:19:48.136 "name": "nvme0", 00:19:48.136 "dhchap_key": "key1", 00:19:48.136 "dhchap_ctrlr_key": "key3", 00:19:48.136 "method": "bdev_nvme_set_keys", 00:19:48.136 "req_id": 1 00:19:48.136 } 00:19:48.136 Got JSON-RPC error response 00:19:48.136 response: 00:19:48.136 { 00:19:48.136 "code": -13, 00:19:48.136 "message": "Permission denied" 00:19:48.136 } 00:19:48.136 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:48.136 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.136 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.136 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.136 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:48.136 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:48.136 16:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.136 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:48.136 16:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:49.523 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:50.093 nvme0n1 00:19:50.093 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:50.093 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.093 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.093 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.093 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:50.093 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:50.094 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:50.094 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
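
The step traced here is the suite's re-key rejection check. The target's allowed keys for this host were just rotated with nvmf_subsystem_set_keys to key2 (host key) and key3 (controller key), and bdev_nvme_set_keys now runs under the NOT wrapper (a helper that succeeds only if the wrapped command fails): re-keying the live nvme0 controller to key2/key0 must be refused, since key0 is no longer a valid controller key on the target. The request/response dump just below shows the expected JSON-RPC error -13 "Permission denied". A minimal sketch of the command pair, copied from this run's trace (the NQNs, socket path, and key names are the ones this job uses):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Target side: restrict this host to key2 (host) / key3 (controller).
  $RPC nvmf_subsystem_set_keys $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key key3

  # Host side: attempt to re-key the live controller with a stale controller
  # key (key0); expected to fail with -13 Permission denied.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
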
00:19:50.094 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.094 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:50.094 16:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.094 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:50.094 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:50.663 request: 00:19:50.663 { 00:19:50.663 "name": "nvme0", 00:19:50.663 "dhchap_key": "key2", 00:19:50.663 "dhchap_ctrlr_key": "key0", 00:19:50.663 "method": "bdev_nvme_set_keys", 00:19:50.663 "req_id": 1 00:19:50.663 } 00:19:50.663 Got JSON-RPC error response 00:19:50.663 response: 00:19:50.663 { 00:19:50.663 "code": -13, 00:19:50.663 "message": "Permission denied" 00:19:50.663 } 00:19:50.663 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:50.663 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.663 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.663 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.663 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:50.663 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.663 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:50.923 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:50.923 16:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:51.864 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:51.864 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:51.864 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1254110 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1254110 ']' 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1254110 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:52.125 
16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.125 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254110 00:19:52.126 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.126 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.126 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254110' 00:19:52.126 killing process with pid 1254110 00:19:52.126 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1254110 00:19:52.126 16:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1254110 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.386 rmmod nvme_tcp 00:19:52.386 rmmod nvme_fabrics 00:19:52.386 rmmod nvme_keyring 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1280029 ']' 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1280029 00:19:52.386 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1280029 ']' 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1280029 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1280029 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1280029' 00:19:52.387 killing process with pid 1280029 00:19:52.387 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1280029 00:19:52.387 16:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1280029 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.647 16:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.559 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:54.559 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gbN /tmp/spdk.key-sha256.uH0 /tmp/spdk.key-sha384.Hjy /tmp/spdk.key-sha512.hIZ /tmp/spdk.key-sha512.alZ /tmp/spdk.key-sha384.yOQ /tmp/spdk.key-sha256.GHm '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:54.559 00:19:54.559 real 2m36.805s 00:19:54.559 user 5m52.565s 00:19:54.559 sys 0m24.788s 00:19:54.559 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.559 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.559 ************************************ 00:19:54.559 END TEST nvmf_auth_target 00:19:54.559 ************************************ 00:19:54.559 16:14:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:54.559 16:14:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:54.559 16:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:54.560 16:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.560 16:14:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:54.821 ************************************ 00:19:54.821 START TEST nvmf_bdevio_no_huge 00:19:54.821 ************************************ 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:54.821 * Looking for test storage... 
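
The auth suite has finished and run_test moves on to nvmf_bdevio_no_huge, which re-runs the bdevio exerciser against a target started without hugepages (bdevio.sh --transport=tcp --no-hugepages). The mechanism shows up where nvmf/common.sh is sourced further down in this trace: the target's argument array gets "${NO_HUGE[@]}" appended. A minimal sketch of that pattern, assuming illustrative values for NO_HUGE and the binary path (only the two NVMF_APP+= lines appear verbatim below, at nvmf/common.sh@29 and @31):

  # --no-huge and -s <size in MB> are standard SPDK/DPDK EAL options; the size
  # and binary path here are assumptions for illustration, not this run's values.
  NO_HUGE=(--no-huge -s 4096)
  NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # verbatim in the trace below
  NVMF_APP+=("${NO_HUGE[@]}")                   # verbatim in the trace below
  "${NVMF_APP[@]}" &   # target now runs from regular 4 KiB pages instead of hugetlbfs
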
00:19:54.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.821 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.822 --rc genhtml_branch_coverage=1 00:19:54.822 --rc genhtml_function_coverage=1 00:19:54.822 --rc genhtml_legend=1 00:19:54.822 --rc geninfo_all_blocks=1 00:19:54.822 --rc geninfo_unexecuted_blocks=1 00:19:54.822 00:19:54.822 ' 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.822 --rc genhtml_branch_coverage=1 00:19:54.822 --rc genhtml_function_coverage=1 00:19:54.822 --rc genhtml_legend=1 00:19:54.822 --rc geninfo_all_blocks=1 00:19:54.822 --rc geninfo_unexecuted_blocks=1 00:19:54.822 00:19:54.822 ' 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.822 --rc genhtml_branch_coverage=1 00:19:54.822 --rc genhtml_function_coverage=1 00:19:54.822 --rc genhtml_legend=1 00:19:54.822 --rc geninfo_all_blocks=1 00:19:54.822 --rc geninfo_unexecuted_blocks=1 00:19:54.822 00:19:54.822 ' 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.822 --rc genhtml_branch_coverage=1 00:19:54.822 --rc genhtml_function_coverage=1 00:19:54.822 --rc genhtml_legend=1 00:19:54.822 --rc geninfo_all_blocks=1 00:19:54.822 --rc geninfo_unexecuted_blocks=1 00:19:54.822 00:19:54.822 ' 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.822 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:54.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.823 16:14:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:02.964 
16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:02.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:02.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.964 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:02.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:02.965 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.965 16:14:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:02.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:20:02.965 00:20:02.965 --- 10.0.0.2 ping statistics --- 00:20:02.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.965 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:20:02.965 00:20:02.965 --- 10.0.0.1 ping statistics --- 00:20:02.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.965 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1288193 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1288193 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1288193 ']' 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.965 16:14:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.965 [2024-11-20 16:14:38.293003] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:20:02.965 [2024-11-20 16:14:38.293076] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:02.965 [2024-11-20 16:14:38.401348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.965 [2024-11-20 16:14:38.461547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.965 [2024-11-20 16:14:38.461593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.965 [2024-11-20 16:14:38.461602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.965 [2024-11-20 16:14:38.461609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.965 [2024-11-20 16:14:38.461615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
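
The preceding entries build the phy-mode TCP rig: one port of the E810 pair is moved into a private network namespace to play the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP on port 4420, and reachability is ping-verified in both directions. The same plumbing, condensed into a standalone sketch (interface names cvl_0_0/cvl_0_1 taken from the trace):

    # target port lives in its own netns; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP (port 4420) from the initiator side, then verify the path
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why nvmf_tgt above is launched through "ip netns exec cvl_0_0_ns_spdk": every target-side process has to run inside that namespace to reach the 10.0.0.2 port.
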
00:20:02.965 [2024-11-20 16:14:38.463102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.966 [2024-11-20 16:14:38.463264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:02.966 [2024-11-20 16:14:38.463597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:02.966 [2024-11-20 16:14:38.463600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.258 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.258 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:03.258 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.258 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.258 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.519 [2024-11-20 16:14:39.172915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.519 Malloc0 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.519 [2024-11-20 16:14:39.227140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:03.519 { 00:20:03.519 "params": { 00:20:03.519 "name": "Nvme$subsystem", 00:20:03.519 "trtype": "$TEST_TRANSPORT", 00:20:03.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.519 "adrfam": "ipv4", 00:20:03.519 "trsvcid": "$NVMF_PORT", 00:20:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.519 "hdgst": ${hdgst:-false}, 00:20:03.519 "ddgst": ${ddgst:-false} 00:20:03.519 }, 00:20:03.519 "method": "bdev_nvme_attach_controller" 00:20:03.519 } 00:20:03.519 EOF 00:20:03.519 )") 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:03.519 16:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:03.519 "params": { 00:20:03.519 "name": "Nvme1", 00:20:03.519 "trtype": "tcp", 00:20:03.519 "traddr": "10.0.0.2", 00:20:03.519 "adrfam": "ipv4", 00:20:03.519 "trsvcid": "4420", 00:20:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.519 "hdgst": false, 00:20:03.519 "ddgst": false 00:20:03.519 }, 00:20:03.519 "method": "bdev_nvme_attach_controller" 00:20:03.519 }' 00:20:03.519 [2024-11-20 16:14:39.284816] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
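
bdevio receives its bdev configuration on an anonymous file descriptor rather than a file: gen_nvmf_target_json fills the heredoc template above, and the printf output shows the resolved per-controller block for Nvme1. The complete document handed to --json /dev/fd/62 wraps that block in SPDK's standard subsystems envelope; reconstructed here as a self-contained sketch (the exact wrapper lives in gen_nvmf_target_json in test/nvmf/common.sh):

    # hand a one-controller bdev config to bdevio on fd 62, no temp file needed
    test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 62<<'JSON'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }
    JSON

Attaching the controller this way is what makes the Nvme1n1 namespace bdev appear as the "I/O target" the suite lists next.
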
00:20:03.519 [2024-11-20 16:14:39.284893] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1288545 ] 00:20:03.519 [2024-11-20 16:14:39.383997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.519 [2024-11-20 16:14:39.444492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.519 [2024-11-20 16:14:39.444654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.519 [2024-11-20 16:14:39.444654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.780 I/O targets: 00:20:03.780 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:03.780 00:20:03.780 00:20:03.780 CUnit - A unit testing framework for C - Version 2.1-3 00:20:03.780 http://cunit.sourceforge.net/ 00:20:03.780 00:20:03.780 00:20:03.780 Suite: bdevio tests on: Nvme1n1 00:20:04.041 Test: blockdev write read block ...passed 00:20:04.041 Test: blockdev write zeroes read block ...passed 00:20:04.041 Test: blockdev write zeroes read no split ...passed 00:20:04.041 Test: blockdev write zeroes read split ...passed 00:20:04.041 Test: blockdev write zeroes read split partial ...passed 00:20:04.041 Test: blockdev reset ...[2024-11-20 16:14:39.838665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:04.041 [2024-11-20 16:14:39.838763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca0800 (9): Bad file descriptor 00:20:04.041 [2024-11-20 16:14:39.856577] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
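
The "Failed to flush tqpair ... (9): Bad file descriptor" message inside the blockdev reset test is the expected intermediate state, not a failure: nvme_ctrlr_disconnect drops the TCP socket first, so the flush of the dying qpair has nothing to write to, and the reset then finishes with "Resetting controller successful." followed by the test's passed marker below. The COMPARE FAILURE (02/85) / ABORTED - FAILED FUSED (00/09) pairs in the comparev-and-writev entries further down are likewise deliberate: when the compare half of a fused COMPARE+WRITE misses, the controller must abort the write half. Outside the suite, the same reset path can be poked over RPC; a sketch, assuming the controller name Nvme1 from the config above and that the bdev_nvme_reset_controller method is available in this SPDK build:

    # bounce the controller through a disconnect/reconnect cycle...
    scripts/rpc.py bdev_nvme_reset_controller Nvme1
    # ...and confirm the namespace bdev survived the reset
    scripts/rpc.py bdev_get_bdevs -b Nvme1n1
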
00:20:04.041 passed 00:20:04.041 Test: blockdev write read 8 blocks ...passed 00:20:04.041 Test: blockdev write read size > 128k ...passed 00:20:04.041 Test: blockdev write read invalid size ...passed 00:20:04.302 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:04.302 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:04.302 Test: blockdev write read max offset ...passed 00:20:04.302 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:04.302 Test: blockdev writev readv 8 blocks ...passed 00:20:04.302 Test: blockdev writev readv 30 x 1block ...passed 00:20:04.302 Test: blockdev writev readv block ...passed 00:20:04.302 Test: blockdev writev readv size > 128k ...passed 00:20:04.302 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:04.302 Test: blockdev comparev and writev ...[2024-11-20 16:14:40.163419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.302 [2024-11-20 16:14:40.163469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:04.303 [2024-11-20 16:14:40.163486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.303 [2024-11-20 16:14:40.163495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.303 [2024-11-20 16:14:40.164039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.303 [2024-11-20 16:14:40.164053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:04.303 [2024-11-20 16:14:40.164068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.303 [2024-11-20 16:14:40.164077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:04.303 [2024-11-20 16:14:40.164505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.303 [2024-11-20 16:14:40.164517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:04.303 [2024-11-20 16:14:40.164531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.303 [2024-11-20 16:14:40.164540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:04.303 [2024-11-20 16:14:40.165074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.303 [2024-11-20 16:14:40.165085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:04.303 [2024-11-20 16:14:40.165099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.303 [2024-11-20 16:14:40.165107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:04.303 passed 00:20:04.564 Test: blockdev nvme passthru rw ...passed 00:20:04.565 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:14:40.250011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.565 [2024-11-20 16:14:40.250031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:04.565 [2024-11-20 16:14:40.250388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.565 [2024-11-20 16:14:40.250400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:04.565 [2024-11-20 16:14:40.250777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.565 [2024-11-20 16:14:40.250789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:04.565 [2024-11-20 16:14:40.251176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.565 [2024-11-20 16:14:40.251189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:04.565 passed 00:20:04.565 Test: blockdev nvme admin passthru ...passed 00:20:04.565 Test: blockdev copy ...passed 00:20:04.565 00:20:04.565 Run Summary: Type Total Ran Passed Failed Inactive 00:20:04.565 suites 1 1 n/a 0 0 00:20:04.565 tests 23 23 23 0 0 00:20:04.565 asserts 152 152 152 0 n/a 00:20:04.565 00:20:04.565 Elapsed time = 1.295 seconds 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.871 rmmod nvme_tcp 00:20:04.871 rmmod nvme_fabrics 00:20:04.871 rmmod nvme_keyring 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1288193 ']' 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1288193 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1288193 ']' 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1288193 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288193 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288193' 00:20:04.871 killing process with pid 1288193 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1288193 00:20:04.871 16:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1288193 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.134 16:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:07.682 00:20:07.682 real 0m12.612s 00:20:07.682 user 0m14.515s 00:20:07.682 sys 0m6.701s 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.682 ************************************ 00:20:07.682 END TEST nvmf_bdevio_no_huge 00:20:07.682 ************************************ 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.682 ************************************ 00:20:07.682 START TEST nvmf_tls 00:20:07.682 ************************************ 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:07.682 * Looking for test storage... 00:20:07.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:07.682 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.683 --rc genhtml_branch_coverage=1 00:20:07.683 --rc genhtml_function_coverage=1 00:20:07.683 --rc genhtml_legend=1 00:20:07.683 --rc geninfo_all_blocks=1 00:20:07.683 --rc geninfo_unexecuted_blocks=1 00:20:07.683 00:20:07.683 ' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.683 --rc genhtml_branch_coverage=1 00:20:07.683 --rc genhtml_function_coverage=1 00:20:07.683 --rc genhtml_legend=1 00:20:07.683 --rc geninfo_all_blocks=1 00:20:07.683 --rc geninfo_unexecuted_blocks=1 00:20:07.683 00:20:07.683 ' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.683 --rc genhtml_branch_coverage=1 00:20:07.683 --rc genhtml_function_coverage=1 00:20:07.683 --rc genhtml_legend=1 00:20:07.683 --rc geninfo_all_blocks=1 00:20:07.683 --rc geninfo_unexecuted_blocks=1 00:20:07.683 00:20:07.683 ' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.683 --rc genhtml_branch_coverage=1 00:20:07.683 --rc genhtml_function_coverage=1 00:20:07.683 --rc genhtml_legend=1 00:20:07.683 --rc geninfo_all_blocks=1 00:20:07.683 --rc geninfo_unexecuted_blocks=1 00:20:07.683 00:20:07.683 ' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
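
The tls suite re-sources the same helpers, so the trace repeats the lcov probe: cmp_versions splits each version string on "." and "-" and compares it component by component, and since 1.15 sorts below 2 the legacy "--rc lcov_branch_coverage=1 ..." option spelling is selected again. The comparison logic, condensed into a standalone helper (a sketch with the same splitting rules; the authoritative code is cmp_versions/lt in scripts/common.sh):

    # return 0 when dotted version $1 sorts strictly below $2
    version_lt() {
        local IFS=.- v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # component greater
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # component smaller
        done
        return 1                                            # versions equal
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
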
00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.683 16:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
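
One harness wart shows up in both suites: the "[: : integer expression expected" complaint from nvmf/common.sh line 33. An unset flag expands unquoted to nothing, so the numeric test degenerates to '[' '' -eq 1 ']' and test(1) rejects the empty operand; the run is unaffected because the condition simply evaluates false. Giving the expansion a default silences the warning; a sketch of the fix (the flag's real name at line 33 is not visible in the trace, so SPDK_RUN_NON_ROOT is only an assumption):

    # before: [ $SPDK_RUN_NON_ROOT -eq 1 ]  -> "[: : integer expression expected" when unset
    # after:  default the flag to 0 so test(1) always sees an integer
    if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
        :   # guarded branch unchanged
    fi
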
00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:15.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.829 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:15.830 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:15.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:15.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:15.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:20:15.830 00:20:15.830 --- 10.0.0.2 ping statistics --- 00:20:15.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.830 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:20:15.830 00:20:15.830 --- 10.0.0.1 ping statistics --- 00:20:15.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.830 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:15.830 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1292899 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1292899 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1292899 ']' 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.831 16:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.831 [2024-11-20 16:14:51.026624] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:20:15.831 [2024-11-20 16:14:51.026689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.831 [2024-11-20 16:14:51.129663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.831 [2024-11-20 16:14:51.181759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.831 [2024-11-20 16:14:51.181812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.831 [2024-11-20 16:14:51.181821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.831 [2024-11-20 16:14:51.181828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.831 [2024-11-20 16:14:51.181834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.831 [2024-11-20 16:14:51.182663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:16.092 16:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:16.353 true 00:20:16.353 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.353 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:16.613 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:16.613 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:16.613 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:16.613 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:16.613 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.874 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:16.874 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:16.874 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:17.135 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.135 16:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:17.135 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:17.135 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:17.135 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.135 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:17.396 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:17.396 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:17.396 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:17.656 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.656 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:17.916 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:17.916 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:17.916 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:17.916 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.916 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:18.176 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:18.176 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:18.176 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:18.177 16:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.cfqPetSoKL 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.OLmEZQjq8m 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cfqPetSoKL 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.OLmEZQjq8m 00:20:18.177 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:18.438 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:18.698 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.cfqPetSoKL 00:20:18.698 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cfqPetSoKL 00:20:18.698 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:18.698 [2024-11-20 16:14:54.592287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.698 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:18.959 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:19.221 [2024-11-20 16:14:54.929109] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.221 [2024-11-20 16:14:54.929338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.221 16:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:19.221 malloc0 00:20:19.221 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:19.481 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cfqPetSoKL 00:20:19.741 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:19.741 16:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cfqPetSoKL 00:20:31.970 Initializing NVMe Controllers 00:20:31.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.970 Initialization complete. Launching workers. 00:20:31.970 ======================================================== 00:20:31.970 Latency(us) 00:20:31.970 Device Information : IOPS MiB/s Average min max 00:20:31.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18649.29 72.85 3431.98 1141.71 5122.02 00:20:31.970 ======================================================== 00:20:31.970 Total : 18649.29 72.85 3431.98 1141.71 5122.02 00:20:31.970 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cfqPetSoKL 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cfqPetSoKL 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1296051 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1296051 /var/tmp/bdevperf.sock 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1296051 ']' 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:31.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.970 16:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.970 [2024-11-20 16:15:05.787536] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:20:31.970 [2024-11-20 16:15:05.787611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296051 ] 00:20:31.970 [2024-11-20 16:15:05.878277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.970 [2024-11-20 16:15:05.913750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.970 16:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.970 16:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:31.970 16:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cfqPetSoKL 00:20:31.970 16:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:31.970 [2024-11-20 16:15:06.905467] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.970 TLSTESTn1 00:20:31.970 16:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:31.970 Running I/O for 10 seconds... 
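The bdevperf run starting here exercises a target-plus-initiator TLS setup that can be condensed from the RPC calls traced so far. A sketch, with $rpc standing in for scripts/rpc.py; every argument appears verbatim in the trace above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# target side: TCP transport, subsystem, TLS listener (-k), namespace backed by malloc0
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# PSK: register the key file and bind it to the one allowed host
$rpc keyring_file_add_key key0 /tmp/tmp.cfqPetSoKL
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side (bdevperf's private RPC socket): same key, then attach over TLS
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cfqPetSoKL
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0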
00:20:33.170 5769.00 IOPS, 22.54 MiB/s [2024-11-20T15:15:10.498Z] 5410.00 IOPS, 21.13 MiB/s [2024-11-20T15:15:11.439Z] 5616.67 IOPS, 21.94 MiB/s [2024-11-20T15:15:12.379Z] 5792.00 IOPS, 22.62 MiB/s [2024-11-20T15:15:13.320Z] 5893.40 IOPS, 23.02 MiB/s [2024-11-20T15:15:14.260Z] 5984.17 IOPS, 23.38 MiB/s [2024-11-20T15:15:15.201Z] 5889.43 IOPS, 23.01 MiB/s [2024-11-20T15:15:16.141Z] 5847.88 IOPS, 22.84 MiB/s [2024-11-20T15:15:17.600Z] 5828.00 IOPS, 22.77 MiB/s [2024-11-20T15:15:17.600Z] 5835.60 IOPS, 22.80 MiB/s 00:20:41.664 Latency(us) 00:20:41.664 [2024-11-20T15:15:17.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.664 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:41.664 Verification LBA range: start 0x0 length 0x2000 00:20:41.664 TLSTESTn1 : 10.02 5835.15 22.79 0.00 0.00 21897.34 4969.81 32549.55 00:20:41.664 [2024-11-20T15:15:17.600Z] =================================================================================================================== 00:20:41.664 [2024-11-20T15:15:17.600Z] Total : 5835.15 22.79 0.00 0.00 21897.34 4969.81 32549.55 00:20:41.664 { 00:20:41.664 "results": [ 00:20:41.664 { 00:20:41.664 "job": "TLSTESTn1", 00:20:41.664 "core_mask": "0x4", 00:20:41.664 "workload": "verify", 00:20:41.664 "status": "finished", 00:20:41.664 "verify_range": { 00:20:41.664 "start": 0, 00:20:41.664 "length": 8192 00:20:41.664 }, 00:20:41.664 "queue_depth": 128, 00:20:41.664 "io_size": 4096, 00:20:41.664 "runtime": 10.022706, 00:20:41.664 "iops": 5835.150706805129, 00:20:41.664 "mibps": 22.793557448457534, 00:20:41.664 "io_failed": 0, 00:20:41.664 "io_timeout": 0, 00:20:41.664 "avg_latency_us": 21897.343886191094, 00:20:41.664 "min_latency_us": 4969.8133333333335, 00:20:41.664 "max_latency_us": 32549.546666666665 00:20:41.664 } 00:20:41.664 ], 00:20:41.664 "core_count": 1 00:20:41.664 } 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1296051 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1296051 ']' 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1296051 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1296051 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1296051' 00:20:41.664 killing process with pid 1296051 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1296051 00:20:41.664 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.664 00:20:41.664 Latency(us) 00:20:41.664 [2024-11-20T15:15:17.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.664 [2024-11-20T15:15:17.600Z] 
=================================================================================================================== 00:20:41.664 [2024-11-20T15:15:17.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1296051 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OLmEZQjq8m 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OLmEZQjq8m 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OLmEZQjq8m 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OLmEZQjq8m 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1298638 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1298638 /var/tmp/bdevperf.sock 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1298638 ']' 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
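The case that begins here (target/tls.sh@147) is a negative test: the initiator registers the second key file, /tmp/tmp.OLmEZQjq8m, which was never bound to any host on the target, so bdev_nvme_attach_controller has to fail, and the NOT wrapper in the trace turns that failure into a pass. A minimal sketch of such a status-inverting helper; the real autotest_common.sh implementation may differ in detail:

NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OLmEZQjq8m
# exits 0 precisely because the attach below errors out with the unknown key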
00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.664 16:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.664 [2024-11-20 16:15:17.383197] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:20:41.664 [2024-11-20 16:15:17.383255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298638 ] 00:20:41.664 [2024-11-20 16:15:17.464811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.664 [2024-11-20 16:15:17.493575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.254 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.254 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:42.254 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OLmEZQjq8m 00:20:42.514 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.774 [2024-11-20 16:15:18.524334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.774 [2024-11-20 16:15:18.528641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.774 [2024-11-20 16:15:18.529307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec0bb0 (107): Transport endpoint is not connected 00:20:42.774 [2024-11-20 16:15:18.530301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec0bb0 (9): Bad file descriptor 00:20:42.774 [2024-11-20 16:15:18.531303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:42.774 [2024-11-20 16:15:18.531314] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.774 [2024-11-20 16:15:18.531319] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:42.774 [2024-11-20 16:15:18.531327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:42.774 request: 00:20:42.774 { 00:20:42.774 "name": "TLSTEST", 00:20:42.774 "trtype": "tcp", 00:20:42.774 "traddr": "10.0.0.2", 00:20:42.774 "adrfam": "ipv4", 00:20:42.774 "trsvcid": "4420", 00:20:42.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.774 "prchk_reftag": false, 00:20:42.774 "prchk_guard": false, 00:20:42.774 "hdgst": false, 00:20:42.774 "ddgst": false, 00:20:42.774 "psk": "key0", 00:20:42.774 "allow_unrecognized_csi": false, 00:20:42.774 "method": "bdev_nvme_attach_controller", 00:20:42.774 "req_id": 1 00:20:42.774 } 00:20:42.774 Got JSON-RPC error response 00:20:42.774 response: 00:20:42.774 { 00:20:42.774 "code": -5, 00:20:42.774 "message": "Input/output error" 00:20:42.774 } 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1298638 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1298638 ']' 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1298638 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1298638 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1298638' 00:20:42.774 killing process with pid 1298638 00:20:42.774 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1298638 00:20:42.774 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.774 00:20:42.774 Latency(us) 00:20:42.774 [2024-11-20T15:15:18.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.774 [2024-11-20T15:15:18.710Z] =================================================================================================================== 00:20:42.774 [2024-11-20T15:15:18.710Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.775 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1298638 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cfqPetSoKL 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.cfqPetSoKL 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cfqPetSoKL 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cfqPetSoKL 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1298890 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1298890 /var/tmp/bdevperf.sock 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1298890 ']' 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.035 16:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.035 [2024-11-20 16:15:18.777470] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:20:43.035 [2024-11-20 16:15:18.777525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298890 ] 00:20:43.035 [2024-11-20 16:15:18.859655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.035 [2024-11-20 16:15:18.888800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.977 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.977 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:43.977 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cfqPetSoKL 00:20:43.977 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:43.977 [2024-11-20 16:15:19.879330] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.977 [2024-11-20 16:15:19.890473] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:43.977 [2024-11-20 16:15:19.890494] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:43.977 [2024-11-20 16:15:19.890513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:43.977 [2024-11-20 16:15:19.891426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127dbb0 (107): Transport endpoint is not connected 00:20:43.977 [2024-11-20 16:15:19.892422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127dbb0 (9): Bad file descriptor 00:20:43.977 [2024-11-20 16:15:19.893424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:43.977 [2024-11-20 16:15:19.893431] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:43.977 [2024-11-20 16:15:19.893437] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:43.977 [2024-11-20 16:15:19.893445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
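The lookup failure above is the expected outcome of test target/tls.sh@150: the target derives the PSK lookup identity from the connecting host and subsystem NQNs, in the form seen verbatim in the tcp_sock_get_key/posix errors (NVMe0R01 <hostnqn> <subnqn>), and only nqn.2016-06.io.spdk:host1 was ever bound to a key. A hypothetical add_host call that would let this host2 attach succeed, mirroring the call traced earlier; the test deliberately omits it:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# bind the same PSK to host2 on the target side (not done in this test, on purpose)
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0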
00:20:43.977 request: 00:20:43.977 { 00:20:43.977 "name": "TLSTEST", 00:20:43.977 "trtype": "tcp", 00:20:43.977 "traddr": "10.0.0.2", 00:20:43.977 "adrfam": "ipv4", 00:20:43.977 "trsvcid": "4420", 00:20:43.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.977 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:43.977 "prchk_reftag": false, 00:20:43.977 "prchk_guard": false, 00:20:43.977 "hdgst": false, 00:20:43.977 "ddgst": false, 00:20:43.977 "psk": "key0", 00:20:43.977 "allow_unrecognized_csi": false, 00:20:43.977 "method": "bdev_nvme_attach_controller", 00:20:43.977 "req_id": 1 00:20:43.977 } 00:20:43.977 Got JSON-RPC error response 00:20:43.977 response: 00:20:43.977 { 00:20:43.977 "code": -5, 00:20:43.977 "message": "Input/output error" 00:20:43.977 } 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1298890 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1298890 ']' 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1298890 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1298890 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1298890' 00:20:44.238 killing process with pid 1298890 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1298890 00:20:44.238 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.238 00:20:44.238 Latency(us) 00:20:44.238 [2024-11-20T15:15:20.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.238 [2024-11-20T15:15:20.174Z] =================================================================================================================== 00:20:44.238 [2024-11-20T15:15:20.174Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.238 16:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1298890 00:20:44.238 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cfqPetSoKL 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.cfqPetSoKL 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cfqPetSoKL 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cfqPetSoKL 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1299225 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1299225 /var/tmp/bdevperf.sock 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1299225 ']' 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.239 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.239 [2024-11-20 16:15:20.151924] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:20:44.239 [2024-11-20 16:15:20.151998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299225 ] 00:20:44.499 [2024-11-20 16:15:20.227811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.499 [2024-11-20 16:15:20.267217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.069 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.069 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:45.069 16:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cfqPetSoKL 00:20:45.328 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.587 [2024-11-20 16:15:21.311278] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.587 [2024-11-20 16:15:21.315855] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:45.587 [2024-11-20 16:15:21.315877] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:45.587 [2024-11-20 16:15:21.315898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:45.587 [2024-11-20 16:15:21.316546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60cbb0 (107): Transport endpoint is not connected 00:20:45.587 [2024-11-20 16:15:21.317540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60cbb0 (9): Bad file descriptor 00:20:45.587 [2024-11-20 16:15:21.318542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:45.587 [2024-11-20 16:15:21.318549] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:45.587 [2024-11-20 16:15:21.318555] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:45.587 [2024-11-20 16:15:21.318563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:45.587 request: 00:20:45.587 { 00:20:45.587 "name": "TLSTEST", 00:20:45.587 "trtype": "tcp", 00:20:45.587 "traddr": "10.0.0.2", 00:20:45.587 "adrfam": "ipv4", 00:20:45.587 "trsvcid": "4420", 00:20:45.587 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.587 "prchk_reftag": false, 00:20:45.587 "prchk_guard": false, 00:20:45.587 "hdgst": false, 00:20:45.587 "ddgst": false, 00:20:45.587 "psk": "key0", 00:20:45.587 "allow_unrecognized_csi": false, 00:20:45.587 "method": "bdev_nvme_attach_controller", 00:20:45.587 "req_id": 1 00:20:45.587 } 00:20:45.587 Got JSON-RPC error response 00:20:45.587 response: 00:20:45.587 { 00:20:45.587 "code": -5, 00:20:45.587 "message": "Input/output error" 00:20:45.587 } 00:20:45.587 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1299225 00:20:45.587 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1299225 ']' 00:20:45.587 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1299225 00:20:45.587 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.587 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.587 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1299225 00:20:45.587 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1299225' 00:20:45.588 killing process with pid 1299225 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1299225 00:20:45.588 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.588 00:20:45.588 Latency(us) 00:20:45.588 [2024-11-20T15:15:21.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.588 [2024-11-20T15:15:21.524Z] =================================================================================================================== 00:20:45.588 [2024-11-20T15:15:21.524Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1299225 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:45.588 
16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1299575 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1299575 /var/tmp/bdevperf.sock 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1299575 ']' 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.588 16:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.849 [2024-11-20 16:15:21.561665] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
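This next case passes an empty string as the key path. As the trace below shows, keyring_file_add_key rejects non-absolute paths outright, and the subsequent attach then fails with -126 because key0 was never registered:

    # Empty key path: rejected before any file I/O happens.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
    # -> -1, Operation not permitted ("Non-absolute paths are not allowed")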
00:20:45.849 [2024-11-20 16:15:21.561718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299575 ] 00:20:45.849 [2024-11-20 16:15:21.644713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.849 [2024-11-20 16:15:21.672617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.790 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.790 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:46.790 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:46.790 [2024-11-20 16:15:22.514896] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:46.791 [2024-11-20 16:15:22.514920] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:46.791 request: 00:20:46.791 { 00:20:46.791 "name": "key0", 00:20:46.791 "path": "", 00:20:46.791 "method": "keyring_file_add_key", 00:20:46.791 "req_id": 1 00:20:46.791 } 00:20:46.791 Got JSON-RPC error response 00:20:46.791 response: 00:20:46.791 { 00:20:46.791 "code": -1, 00:20:46.791 "message": "Operation not permitted" 00:20:46.791 } 00:20:46.791 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:46.791 [2024-11-20 16:15:22.699449] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.791 [2024-11-20 16:15:22.699474] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:46.791 request: 00:20:46.791 { 00:20:46.791 "name": "TLSTEST", 00:20:46.791 "trtype": "tcp", 00:20:46.791 "traddr": "10.0.0.2", 00:20:46.791 "adrfam": "ipv4", 00:20:46.791 "trsvcid": "4420", 00:20:46.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.791 "prchk_reftag": false, 00:20:46.791 "prchk_guard": false, 00:20:46.791 "hdgst": false, 00:20:46.791 "ddgst": false, 00:20:46.791 "psk": "key0", 00:20:46.791 "allow_unrecognized_csi": false, 00:20:46.791 "method": "bdev_nvme_attach_controller", 00:20:46.791 "req_id": 1 00:20:46.791 } 00:20:46.791 Got JSON-RPC error response 00:20:46.791 response: 00:20:46.791 { 00:20:46.791 "code": -126, 00:20:46.791 "message": "Required key not available" 00:20:46.791 } 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1299575 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1299575 ']' 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1299575 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1299575 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1299575' 00:20:47.052 killing process with pid 1299575 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1299575 00:20:47.052 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.052 00:20:47.052 Latency(us) 00:20:47.052 [2024-11-20T15:15:22.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.052 [2024-11-20T15:15:22.988Z] =================================================================================================================== 00:20:47.052 [2024-11-20T15:15:22.988Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1299575 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1292899 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1292899 ']' 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1292899 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1292899 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1292899' 00:20:47.052 killing process with pid 1292899 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1292899 00:20:47.052 16:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1292899 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:47.313 16:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lD8nt2WzYN 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lD8nt2WzYN 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1299925 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1299925 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1299925 ']' 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.313 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.313 [2024-11-20 16:15:23.165541] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:20:47.313 [2024-11-20 16:15:23.165603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.574 [2024-11-20 16:15:23.259977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.574 [2024-11-20 16:15:23.292820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.574 [2024-11-20 16:15:23.292852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
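The key_long value above is the NVMe/TCP TLS PSK interchange form: the literal prefix NVMeTLSkey-1, a two-digit hash field (digest 2 here), and a base64 blob, colon-terminated. A minimal reproduction of the helper's python step, assuming the blob is the key bytes with a little-endian CRC32 appended:

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key"
    # -> NVMeTLSkey-1:02:MDAx...Njc3wWXNJw==:  (matches key_long above)

The result is written to the mktemp file /tmp/tmp.lD8nt2WzYN and chmod'ed 0600, which matters for the permission tests further down.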
00:20:47.574 [2024-11-20 16:15:23.292858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.574 [2024-11-20 16:15:23.292863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.574 [2024-11-20 16:15:23.292867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.574 [2024-11-20 16:15:23.293359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.145 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.145 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.145 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.145 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.145 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.145 16:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.145 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lD8nt2WzYN 00:20:48.145 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lD8nt2WzYN 00:20:48.145 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:48.405 [2024-11-20 16:15:24.154855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.405 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:48.665 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:48.665 [2024-11-20 16:15:24.523755] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.665 [2024-11-20 16:15:24.523950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.665 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:48.925 malloc0 00:20:48.925 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.186 16:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:20:49.186 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD8nt2WzYN 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lD8nt2WzYN 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1300292 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1300292 /var/tmp/bdevperf.sock 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1300292 ']' 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.447 16:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.447 [2024-11-20 16:15:25.318335] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
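Unlike the earlier negative cases, both sides now share /tmp/tmp.lD8nt2WzYN (mode 0600). The target-side sequence traced just above, condensed (rpc.py assumed on PATH, talking to the default /var/tmp/spdk.sock):

    # setup_nvmf_tgt: TCP transport, subsystem + malloc namespace, TLS listener,
    # file-backed key, and a host entry bound to that key.
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With that in place the attach below succeeds, the bdev shows up as TLSTESTn1, and perform_tests drives the 10-second verify workload.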
00:20:49.447 [2024-11-20 16:15:25.318389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300292 ] 00:20:49.707 [2024-11-20 16:15:25.403519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.708 [2024-11-20 16:15:25.432541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.276 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.276 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:50.276 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:20:50.535 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.535 [2024-11-20 16:15:26.463092] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.794 TLSTESTn1 00:20:50.795 16:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:50.795 Running I/O for 10 seconds... 00:20:53.114 6449.00 IOPS, 25.19 MiB/s [2024-11-20T15:15:29.990Z] 6588.50 IOPS, 25.74 MiB/s [2024-11-20T15:15:30.930Z] 6531.67 IOPS, 25.51 MiB/s [2024-11-20T15:15:31.872Z] 6467.25 IOPS, 25.26 MiB/s [2024-11-20T15:15:32.813Z] 6487.20 IOPS, 25.34 MiB/s [2024-11-20T15:15:33.753Z] 6488.50 IOPS, 25.35 MiB/s [2024-11-20T15:15:34.693Z] 6481.29 IOPS, 25.32 MiB/s [2024-11-20T15:15:36.076Z] 6464.38 IOPS, 25.25 MiB/s [2024-11-20T15:15:37.019Z] 6456.00 IOPS, 25.22 MiB/s [2024-11-20T15:15:37.019Z] 6469.30 IOPS, 25.27 MiB/s 00:21:01.083 Latency(us) 00:21:01.083 [2024-11-20T15:15:37.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.083 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.083 Verification LBA range: start 0x0 length 0x2000 00:21:01.083 TLSTESTn1 : 10.02 6471.59 25.28 0.00 0.00 19745.31 7154.35 18896.21 00:21:01.083 [2024-11-20T15:15:37.019Z] =================================================================================================================== 00:21:01.083 [2024-11-20T15:15:37.019Z] Total : 6471.59 25.28 0.00 0.00 19745.31 7154.35 18896.21 00:21:01.083 { 00:21:01.083 "results": [ 00:21:01.083 { 00:21:01.083 "job": "TLSTESTn1", 00:21:01.083 "core_mask": "0x4", 00:21:01.083 "workload": "verify", 00:21:01.083 "status": "finished", 00:21:01.083 "verify_range": { 00:21:01.083 "start": 0, 00:21:01.083 "length": 8192 00:21:01.083 }, 00:21:01.083 "queue_depth": 128, 00:21:01.083 "io_size": 4096, 00:21:01.083 "runtime": 10.015934, 00:21:01.083 "iops": 6471.588171407679, 00:21:01.083 "mibps": 25.279641294561245, 00:21:01.083 "io_failed": 0, 00:21:01.083 "io_timeout": 0, 00:21:01.083 "avg_latency_us": 19745.31463675774, 00:21:01.083 "min_latency_us": 7154.346666666666, 00:21:01.084 "max_latency_us": 18896.213333333333 00:21:01.084 } 00:21:01.084 ], 00:21:01.084 
"core_count": 1 00:21:01.084 } 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1300292 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1300292 ']' 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1300292 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1300292 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1300292' 00:21:01.084 killing process with pid 1300292 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1300292 00:21:01.084 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.084 00:21:01.084 Latency(us) 00:21:01.084 [2024-11-20T15:15:37.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.084 [2024-11-20T15:15:37.020Z] =================================================================================================================== 00:21:01.084 [2024-11-20T15:15:37.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1300292 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lD8nt2WzYN 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD8nt2WzYN 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD8nt2WzYN 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD8nt2WzYN 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lD8nt2WzYN 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1302631 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1302631 /var/tmp/bdevperf.sock 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1302631 ']' 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.084 16:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.084 [2024-11-20 16:15:36.939687] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
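The keyring refuses key files that are readable by group or others, so this attempt is expected to die at key registration, as the trace below confirms:

    chmod 0666 /tmp/tmp.lD8nt2WzYN
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN
    # -> -1, Operation not permitted (the file mode is logged as 0100666);
    # the later attach with --psk key0 then fails with -126, Required key not available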
00:21:01.084 [2024-11-20 16:15:36.939743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302631 ] 00:21:01.351 [2024-11-20 16:15:37.024905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.351 [2024-11-20 16:15:37.053154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.921 16:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.921 16:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:01.921 16:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:21:02.182 [2024-11-20 16:15:37.895458] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lD8nt2WzYN': 0100666 00:21:02.182 [2024-11-20 16:15:37.895483] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:02.182 request: 00:21:02.182 { 00:21:02.182 "name": "key0", 00:21:02.182 "path": "/tmp/tmp.lD8nt2WzYN", 00:21:02.182 "method": "keyring_file_add_key", 00:21:02.182 "req_id": 1 00:21:02.182 } 00:21:02.182 Got JSON-RPC error response 00:21:02.182 response: 00:21:02.182 { 00:21:02.182 "code": -1, 00:21:02.182 "message": "Operation not permitted" 00:21:02.182 } 00:21:02.182 16:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:02.182 [2024-11-20 16:15:38.071976] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.182 [2024-11-20 16:15:38.071999] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:02.182 request: 00:21:02.182 { 00:21:02.182 "name": "TLSTEST", 00:21:02.182 "trtype": "tcp", 00:21:02.182 "traddr": "10.0.0.2", 00:21:02.182 "adrfam": "ipv4", 00:21:02.182 "trsvcid": "4420", 00:21:02.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.182 "prchk_reftag": false, 00:21:02.182 "prchk_guard": false, 00:21:02.182 "hdgst": false, 00:21:02.182 "ddgst": false, 00:21:02.182 "psk": "key0", 00:21:02.182 "allow_unrecognized_csi": false, 00:21:02.182 "method": "bdev_nvme_attach_controller", 00:21:02.182 "req_id": 1 00:21:02.182 } 00:21:02.182 Got JSON-RPC error response 00:21:02.182 response: 00:21:02.182 { 00:21:02.182 "code": -126, 00:21:02.182 "message": "Required key not available" 00:21:02.182 } 00:21:02.182 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1302631 00:21:02.182 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1302631 ']' 00:21:02.182 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1302631 00:21:02.182 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.182 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.182 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1302631 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1302631' 00:21:02.443 killing process with pid 1302631 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1302631 00:21:02.443 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.443 00:21:02.443 Latency(us) 00:21:02.443 [2024-11-20T15:15:38.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.443 [2024-11-20T15:15:38.379Z] =================================================================================================================== 00:21:02.443 [2024-11-20T15:15:38.379Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1302631 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1299925 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1299925 ']' 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1299925 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1299925 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1299925' 00:21:02.443 killing process with pid 1299925 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1299925 00:21:02.443 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1299925 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1302892 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1302892 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1302892 ']' 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.704 16:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.704 [2024-11-20 16:15:38.487849] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:02.704 [2024-11-20 16:15:38.487908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.704 [2024-11-20 16:15:38.577369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.704 [2024-11-20 16:15:38.606273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.704 [2024-11-20 16:15:38.606303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.704 [2024-11-20 16:15:38.606309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.704 [2024-11-20 16:15:38.606313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.704 [2024-11-20 16:15:38.606318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
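The same permission failure is now exercised on the target side: with the key file still 0666, keyring_file_add_key fails during setup_nvmf_tgt, and binding the host to the missing key fails with -32603, as the trace below shows:

    rpc.py keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN     # -> -1 (file is 0666)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                 # -> -32603, Internal error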
00:21:02.704 [2024-11-20 16:15:38.606756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lD8nt2WzYN 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lD8nt2WzYN 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:03.644 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.645 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:03.645 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.645 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.lD8nt2WzYN 00:21:03.645 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lD8nt2WzYN 00:21:03.645 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:03.645 [2024-11-20 16:15:39.477835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.645 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:03.905 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:03.905 [2024-11-20 16:15:39.834716] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.905 [2024-11-20 16:15:39.834912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.165 16:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:04.165 malloc0 00:21:04.165 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:04.426 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:21:04.686 [2024-11-20 
16:15:40.373844] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lD8nt2WzYN': 0100666 00:21:04.686 [2024-11-20 16:15:40.373866] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:04.686 request: 00:21:04.686 { 00:21:04.686 "name": "key0", 00:21:04.686 "path": "/tmp/tmp.lD8nt2WzYN", 00:21:04.686 "method": "keyring_file_add_key", 00:21:04.686 "req_id": 1 00:21:04.686 } 00:21:04.686 Got JSON-RPC error response 00:21:04.686 response: 00:21:04.686 { 00:21:04.686 "code": -1, 00:21:04.686 "message": "Operation not permitted" 00:21:04.686 } 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:04.686 [2024-11-20 16:15:40.558316] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:04.686 [2024-11-20 16:15:40.558343] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:04.686 request: 00:21:04.686 { 00:21:04.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.686 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.686 "psk": "key0", 00:21:04.686 "method": "nvmf_subsystem_add_host", 00:21:04.686 "req_id": 1 00:21:04.686 } 00:21:04.686 Got JSON-RPC error response 00:21:04.686 response: 00:21:04.686 { 00:21:04.686 "code": -32603, 00:21:04.686 "message": "Internal error" 00:21:04.686 } 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1302892 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1302892 ']' 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1302892 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.686 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1302892 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1302892' 00:21:04.946 killing process with pid 1302892 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1302892 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1302892 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lD8nt2WzYN 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:04.946 16:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1303354 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1303354 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1303354 ']' 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.946 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.946 [2024-11-20 16:15:40.822664] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:04.946 [2024-11-20 16:15:40.822718] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.207 [2024-11-20 16:15:40.887997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.207 [2024-11-20 16:15:40.915579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.207 [2024-11-20 16:15:40.915611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.207 [2024-11-20 16:15:40.915617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.207 [2024-11-20 16:15:40.915622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.207 [2024-11-20 16:15:40.915626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
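With the key restored to 0600, setup_nvmf_tgt completes, a TLS-backed I/O run is started, and save_config dumps the live target configuration as the JSON below. One way to pull just the keyring entry out of that dump (jq assumed available):

    rpc.py save_config | jq '.subsystems[] | select(.subsystem == "keyring")'
    # -> the keyring_file_add_key entry for key0 pointing at /tmp/tmp.lD8nt2WzYN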
00:21:05.207 [2024-11-20 16:15:40.916070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.207 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.207 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.207 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.207 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.207 16:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.207 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.207 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lD8nt2WzYN 00:21:05.207 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lD8nt2WzYN 00:21:05.207 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:05.467 [2024-11-20 16:15:41.193656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.467 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:05.467 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:05.726 [2024-11-20 16:15:41.530483] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.726 [2024-11-20 16:15:41.530692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.726 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:05.986 malloc0 00:21:05.986 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:05.986 16:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:21:06.247 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:06.507 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1303719 00:21:06.507 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.507 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.507 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1303719 /var/tmp/bdevperf.sock 00:21:06.507 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1303719 ']' 00:21:06.508 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.508 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.508 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.508 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.508 16:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.508 [2024-11-20 16:15:42.253334] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:06.508 [2024-11-20 16:15:42.253388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303719 ] 00:21:06.508 [2024-11-20 16:15:42.337499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.508 [2024-11-20 16:15:42.367092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.448 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.448 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.448 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:21:07.448 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:07.448 [2024-11-20 16:15:43.345622] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.708 TLSTESTn1 00:21:07.708 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:07.969 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:07.969 "subsystems": [ 00:21:07.969 { 00:21:07.969 "subsystem": "keyring", 00:21:07.969 "config": [ 00:21:07.969 { 00:21:07.969 "method": "keyring_file_add_key", 00:21:07.969 "params": { 00:21:07.969 "name": "key0", 00:21:07.969 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:07.969 } 00:21:07.969 } 00:21:07.969 ] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "iobuf", 00:21:07.969 "config": [ 00:21:07.969 { 00:21:07.969 "method": "iobuf_set_options", 00:21:07.969 "params": { 00:21:07.969 "small_pool_count": 8192, 00:21:07.969 "large_pool_count": 1024, 00:21:07.969 "small_bufsize": 8192, 00:21:07.969 "large_bufsize": 135168, 00:21:07.969 "enable_numa": false 00:21:07.969 } 00:21:07.969 } 00:21:07.969 ] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "sock", 00:21:07.969 "config": [ 00:21:07.969 { 00:21:07.969 "method": "sock_set_default_impl", 00:21:07.969 "params": { 00:21:07.969 "impl_name": "posix" 
00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "sock_impl_set_options", 00:21:07.969 "params": { 00:21:07.969 "impl_name": "ssl", 00:21:07.969 "recv_buf_size": 4096, 00:21:07.969 "send_buf_size": 4096, 00:21:07.969 "enable_recv_pipe": true, 00:21:07.969 "enable_quickack": false, 00:21:07.969 "enable_placement_id": 0, 00:21:07.969 "enable_zerocopy_send_server": true, 00:21:07.969 "enable_zerocopy_send_client": false, 00:21:07.969 "zerocopy_threshold": 0, 00:21:07.969 "tls_version": 0, 00:21:07.969 "enable_ktls": false 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "sock_impl_set_options", 00:21:07.969 "params": { 00:21:07.969 "impl_name": "posix", 00:21:07.969 "recv_buf_size": 2097152, 00:21:07.969 "send_buf_size": 2097152, 00:21:07.969 "enable_recv_pipe": true, 00:21:07.969 "enable_quickack": false, 00:21:07.969 "enable_placement_id": 0, 00:21:07.969 "enable_zerocopy_send_server": true, 00:21:07.969 "enable_zerocopy_send_client": false, 00:21:07.969 "zerocopy_threshold": 0, 00:21:07.969 "tls_version": 0, 00:21:07.969 "enable_ktls": false 00:21:07.969 } 00:21:07.969 } 00:21:07.969 ] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "vmd", 00:21:07.969 "config": [] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "accel", 00:21:07.969 "config": [ 00:21:07.969 { 00:21:07.969 "method": "accel_set_options", 00:21:07.969 "params": { 00:21:07.969 "small_cache_size": 128, 00:21:07.969 "large_cache_size": 16, 00:21:07.969 "task_count": 2048, 00:21:07.969 "sequence_count": 2048, 00:21:07.969 "buf_count": 2048 00:21:07.969 } 00:21:07.969 } 00:21:07.969 ] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "bdev", 00:21:07.969 "config": [ 00:21:07.969 { 00:21:07.969 "method": "bdev_set_options", 00:21:07.969 "params": { 00:21:07.969 "bdev_io_pool_size": 65535, 00:21:07.969 "bdev_io_cache_size": 256, 00:21:07.969 "bdev_auto_examine": true, 00:21:07.969 "iobuf_small_cache_size": 128, 00:21:07.969 "iobuf_large_cache_size": 16 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "bdev_raid_set_options", 00:21:07.969 "params": { 00:21:07.969 "process_window_size_kb": 1024, 00:21:07.969 "process_max_bandwidth_mb_sec": 0 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "bdev_iscsi_set_options", 00:21:07.969 "params": { 00:21:07.969 "timeout_sec": 30 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "bdev_nvme_set_options", 00:21:07.969 "params": { 00:21:07.969 "action_on_timeout": "none", 00:21:07.969 "timeout_us": 0, 00:21:07.969 "timeout_admin_us": 0, 00:21:07.969 "keep_alive_timeout_ms": 10000, 00:21:07.969 "arbitration_burst": 0, 00:21:07.969 "low_priority_weight": 0, 00:21:07.969 "medium_priority_weight": 0, 00:21:07.969 "high_priority_weight": 0, 00:21:07.969 "nvme_adminq_poll_period_us": 10000, 00:21:07.969 "nvme_ioq_poll_period_us": 0, 00:21:07.969 "io_queue_requests": 0, 00:21:07.969 "delay_cmd_submit": true, 00:21:07.969 "transport_retry_count": 4, 00:21:07.969 "bdev_retry_count": 3, 00:21:07.969 "transport_ack_timeout": 0, 00:21:07.969 "ctrlr_loss_timeout_sec": 0, 00:21:07.969 "reconnect_delay_sec": 0, 00:21:07.969 "fast_io_fail_timeout_sec": 0, 00:21:07.969 "disable_auto_failback": false, 00:21:07.969 "generate_uuids": false, 00:21:07.969 "transport_tos": 0, 00:21:07.969 "nvme_error_stat": false, 00:21:07.969 "rdma_srq_size": 0, 00:21:07.969 "io_path_stat": false, 00:21:07.969 "allow_accel_sequence": false, 00:21:07.969 "rdma_max_cq_size": 0, 00:21:07.969 
"rdma_cm_event_timeout_ms": 0, 00:21:07.969 "dhchap_digests": [ 00:21:07.969 "sha256", 00:21:07.969 "sha384", 00:21:07.969 "sha512" 00:21:07.969 ], 00:21:07.969 "dhchap_dhgroups": [ 00:21:07.969 "null", 00:21:07.969 "ffdhe2048", 00:21:07.969 "ffdhe3072", 00:21:07.969 "ffdhe4096", 00:21:07.969 "ffdhe6144", 00:21:07.969 "ffdhe8192" 00:21:07.969 ] 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "bdev_nvme_set_hotplug", 00:21:07.969 "params": { 00:21:07.969 "period_us": 100000, 00:21:07.969 "enable": false 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "bdev_malloc_create", 00:21:07.969 "params": { 00:21:07.969 "name": "malloc0", 00:21:07.969 "num_blocks": 8192, 00:21:07.969 "block_size": 4096, 00:21:07.969 "physical_block_size": 4096, 00:21:07.969 "uuid": "f85c8e49-45d8-4d3e-845f-052190ef3d88", 00:21:07.969 "optimal_io_boundary": 0, 00:21:07.969 "md_size": 0, 00:21:07.969 "dif_type": 0, 00:21:07.969 "dif_is_head_of_md": false, 00:21:07.969 "dif_pi_format": 0 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "bdev_wait_for_examine" 00:21:07.969 } 00:21:07.969 ] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "nbd", 00:21:07.969 "config": [] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "scheduler", 00:21:07.969 "config": [ 00:21:07.969 { 00:21:07.969 "method": "framework_set_scheduler", 00:21:07.969 "params": { 00:21:07.969 "name": "static" 00:21:07.969 } 00:21:07.969 } 00:21:07.969 ] 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "subsystem": "nvmf", 00:21:07.969 "config": [ 00:21:07.969 { 00:21:07.969 "method": "nvmf_set_config", 00:21:07.969 "params": { 00:21:07.969 "discovery_filter": "match_any", 00:21:07.969 "admin_cmd_passthru": { 00:21:07.969 "identify_ctrlr": false 00:21:07.969 }, 00:21:07.969 "dhchap_digests": [ 00:21:07.969 "sha256", 00:21:07.969 "sha384", 00:21:07.969 "sha512" 00:21:07.969 ], 00:21:07.969 "dhchap_dhgroups": [ 00:21:07.969 "null", 00:21:07.969 "ffdhe2048", 00:21:07.969 "ffdhe3072", 00:21:07.969 "ffdhe4096", 00:21:07.969 "ffdhe6144", 00:21:07.969 "ffdhe8192" 00:21:07.969 ] 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "nvmf_set_max_subsystems", 00:21:07.969 "params": { 00:21:07.969 "max_subsystems": 1024 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "nvmf_set_crdt", 00:21:07.969 "params": { 00:21:07.969 "crdt1": 0, 00:21:07.969 "crdt2": 0, 00:21:07.969 "crdt3": 0 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "nvmf_create_transport", 00:21:07.969 "params": { 00:21:07.969 "trtype": "TCP", 00:21:07.969 "max_queue_depth": 128, 00:21:07.969 "max_io_qpairs_per_ctrlr": 127, 00:21:07.969 "in_capsule_data_size": 4096, 00:21:07.969 "max_io_size": 131072, 00:21:07.969 "io_unit_size": 131072, 00:21:07.969 "max_aq_depth": 128, 00:21:07.969 "num_shared_buffers": 511, 00:21:07.969 "buf_cache_size": 4294967295, 00:21:07.969 "dif_insert_or_strip": false, 00:21:07.969 "zcopy": false, 00:21:07.969 "c2h_success": false, 00:21:07.969 "sock_priority": 0, 00:21:07.969 "abort_timeout_sec": 1, 00:21:07.969 "ack_timeout": 0, 00:21:07.969 "data_wr_pool_size": 0 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "nvmf_create_subsystem", 00:21:07.969 "params": { 00:21:07.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.969 "allow_any_host": false, 00:21:07.969 "serial_number": "SPDK00000000000001", 00:21:07.969 "model_number": "SPDK bdev Controller", 00:21:07.969 "max_namespaces": 10, 00:21:07.969 "min_cntlid": 1, 00:21:07.969 
"max_cntlid": 65519, 00:21:07.969 "ana_reporting": false 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "nvmf_subsystem_add_host", 00:21:07.969 "params": { 00:21:07.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.969 "host": "nqn.2016-06.io.spdk:host1", 00:21:07.969 "psk": "key0" 00:21:07.969 } 00:21:07.969 }, 00:21:07.969 { 00:21:07.969 "method": "nvmf_subsystem_add_ns", 00:21:07.969 "params": { 00:21:07.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.969 "namespace": { 00:21:07.970 "nsid": 1, 00:21:07.970 "bdev_name": "malloc0", 00:21:07.970 "nguid": "F85C8E4945D84D3E845F052190EF3D88", 00:21:07.970 "uuid": "f85c8e49-45d8-4d3e-845f-052190ef3d88", 00:21:07.970 "no_auto_visible": false 00:21:07.970 } 00:21:07.970 } 00:21:07.970 }, 00:21:07.970 { 00:21:07.970 "method": "nvmf_subsystem_add_listener", 00:21:07.970 "params": { 00:21:07.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.970 "listen_address": { 00:21:07.970 "trtype": "TCP", 00:21:07.970 "adrfam": "IPv4", 00:21:07.970 "traddr": "10.0.0.2", 00:21:07.970 "trsvcid": "4420" 00:21:07.970 }, 00:21:07.970 "secure_channel": true 00:21:07.970 } 00:21:07.970 } 00:21:07.970 ] 00:21:07.970 } 00:21:07.970 ] 00:21:07.970 }' 00:21:07.970 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:08.230 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:08.230 "subsystems": [ 00:21:08.230 { 00:21:08.230 "subsystem": "keyring", 00:21:08.230 "config": [ 00:21:08.230 { 00:21:08.230 "method": "keyring_file_add_key", 00:21:08.230 "params": { 00:21:08.230 "name": "key0", 00:21:08.230 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:08.230 } 00:21:08.230 } 00:21:08.230 ] 00:21:08.230 }, 00:21:08.230 { 00:21:08.230 "subsystem": "iobuf", 00:21:08.230 "config": [ 00:21:08.230 { 00:21:08.230 "method": "iobuf_set_options", 00:21:08.230 "params": { 00:21:08.230 "small_pool_count": 8192, 00:21:08.230 "large_pool_count": 1024, 00:21:08.230 "small_bufsize": 8192, 00:21:08.230 "large_bufsize": 135168, 00:21:08.230 "enable_numa": false 00:21:08.230 } 00:21:08.230 } 00:21:08.230 ] 00:21:08.230 }, 00:21:08.230 { 00:21:08.230 "subsystem": "sock", 00:21:08.230 "config": [ 00:21:08.230 { 00:21:08.230 "method": "sock_set_default_impl", 00:21:08.230 "params": { 00:21:08.230 "impl_name": "posix" 00:21:08.230 } 00:21:08.230 }, 00:21:08.230 { 00:21:08.230 "method": "sock_impl_set_options", 00:21:08.230 "params": { 00:21:08.230 "impl_name": "ssl", 00:21:08.230 "recv_buf_size": 4096, 00:21:08.230 "send_buf_size": 4096, 00:21:08.230 "enable_recv_pipe": true, 00:21:08.230 "enable_quickack": false, 00:21:08.230 "enable_placement_id": 0, 00:21:08.230 "enable_zerocopy_send_server": true, 00:21:08.230 "enable_zerocopy_send_client": false, 00:21:08.230 "zerocopy_threshold": 0, 00:21:08.230 "tls_version": 0, 00:21:08.230 "enable_ktls": false 00:21:08.230 } 00:21:08.230 }, 00:21:08.230 { 00:21:08.230 "method": "sock_impl_set_options", 00:21:08.230 "params": { 00:21:08.230 "impl_name": "posix", 00:21:08.230 "recv_buf_size": 2097152, 00:21:08.230 "send_buf_size": 2097152, 00:21:08.230 "enable_recv_pipe": true, 00:21:08.230 "enable_quickack": false, 00:21:08.230 "enable_placement_id": 0, 00:21:08.230 "enable_zerocopy_send_server": true, 00:21:08.230 "enable_zerocopy_send_client": false, 00:21:08.230 "zerocopy_threshold": 0, 00:21:08.230 "tls_version": 0, 00:21:08.230 "enable_ktls": false 00:21:08.230 } 00:21:08.230 
} 00:21:08.230 ] 00:21:08.230 }, 00:21:08.230 { 00:21:08.230 "subsystem": "vmd", 00:21:08.230 "config": [] 00:21:08.230 }, 00:21:08.230 { 00:21:08.230 "subsystem": "accel", 00:21:08.230 "config": [ 00:21:08.230 { 00:21:08.230 "method": "accel_set_options", 00:21:08.230 "params": { 00:21:08.230 "small_cache_size": 128, 00:21:08.230 "large_cache_size": 16, 00:21:08.230 "task_count": 2048, 00:21:08.231 "sequence_count": 2048, 00:21:08.231 "buf_count": 2048 00:21:08.231 } 00:21:08.231 } 00:21:08.231 ] 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "subsystem": "bdev", 00:21:08.231 "config": [ 00:21:08.231 { 00:21:08.231 "method": "bdev_set_options", 00:21:08.231 "params": { 00:21:08.231 "bdev_io_pool_size": 65535, 00:21:08.231 "bdev_io_cache_size": 256, 00:21:08.231 "bdev_auto_examine": true, 00:21:08.231 "iobuf_small_cache_size": 128, 00:21:08.231 "iobuf_large_cache_size": 16 00:21:08.231 } 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "method": "bdev_raid_set_options", 00:21:08.231 "params": { 00:21:08.231 "process_window_size_kb": 1024, 00:21:08.231 "process_max_bandwidth_mb_sec": 0 00:21:08.231 } 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "method": "bdev_iscsi_set_options", 00:21:08.231 "params": { 00:21:08.231 "timeout_sec": 30 00:21:08.231 } 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "method": "bdev_nvme_set_options", 00:21:08.231 "params": { 00:21:08.231 "action_on_timeout": "none", 00:21:08.231 "timeout_us": 0, 00:21:08.231 "timeout_admin_us": 0, 00:21:08.231 "keep_alive_timeout_ms": 10000, 00:21:08.231 "arbitration_burst": 0, 00:21:08.231 "low_priority_weight": 0, 00:21:08.231 "medium_priority_weight": 0, 00:21:08.231 "high_priority_weight": 0, 00:21:08.231 "nvme_adminq_poll_period_us": 10000, 00:21:08.231 "nvme_ioq_poll_period_us": 0, 00:21:08.231 "io_queue_requests": 512, 00:21:08.231 "delay_cmd_submit": true, 00:21:08.231 "transport_retry_count": 4, 00:21:08.231 "bdev_retry_count": 3, 00:21:08.231 "transport_ack_timeout": 0, 00:21:08.231 "ctrlr_loss_timeout_sec": 0, 00:21:08.231 "reconnect_delay_sec": 0, 00:21:08.231 "fast_io_fail_timeout_sec": 0, 00:21:08.231 "disable_auto_failback": false, 00:21:08.231 "generate_uuids": false, 00:21:08.231 "transport_tos": 0, 00:21:08.231 "nvme_error_stat": false, 00:21:08.231 "rdma_srq_size": 0, 00:21:08.231 "io_path_stat": false, 00:21:08.231 "allow_accel_sequence": false, 00:21:08.231 "rdma_max_cq_size": 0, 00:21:08.231 "rdma_cm_event_timeout_ms": 0, 00:21:08.231 "dhchap_digests": [ 00:21:08.231 "sha256", 00:21:08.231 "sha384", 00:21:08.231 "sha512" 00:21:08.231 ], 00:21:08.231 "dhchap_dhgroups": [ 00:21:08.231 "null", 00:21:08.231 "ffdhe2048", 00:21:08.231 "ffdhe3072", 00:21:08.231 "ffdhe4096", 00:21:08.231 "ffdhe6144", 00:21:08.231 "ffdhe8192" 00:21:08.231 ] 00:21:08.231 } 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "method": "bdev_nvme_attach_controller", 00:21:08.231 "params": { 00:21:08.231 "name": "TLSTEST", 00:21:08.231 "trtype": "TCP", 00:21:08.231 "adrfam": "IPv4", 00:21:08.231 "traddr": "10.0.0.2", 00:21:08.231 "trsvcid": "4420", 00:21:08.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.231 "prchk_reftag": false, 00:21:08.231 "prchk_guard": false, 00:21:08.231 "ctrlr_loss_timeout_sec": 0, 00:21:08.231 "reconnect_delay_sec": 0, 00:21:08.231 "fast_io_fail_timeout_sec": 0, 00:21:08.231 "psk": "key0", 00:21:08.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.231 "hdgst": false, 00:21:08.231 "ddgst": false, 00:21:08.231 "multipath": "multipath" 00:21:08.231 } 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "method": 
"bdev_nvme_set_hotplug", 00:21:08.231 "params": { 00:21:08.231 "period_us": 100000, 00:21:08.231 "enable": false 00:21:08.231 } 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "method": "bdev_wait_for_examine" 00:21:08.231 } 00:21:08.231 ] 00:21:08.231 }, 00:21:08.231 { 00:21:08.231 "subsystem": "nbd", 00:21:08.231 "config": [] 00:21:08.231 } 00:21:08.231 ] 00:21:08.231 }' 00:21:08.231 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1303719 00:21:08.231 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1303719 ']' 00:21:08.231 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1303719 00:21:08.231 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.231 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.231 16:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1303719 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1303719' 00:21:08.231 killing process with pid 1303719 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1303719 00:21:08.231 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.231 00:21:08.231 Latency(us) 00:21:08.231 [2024-11-20T15:15:44.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.231 [2024-11-20T15:15:44.167Z] =================================================================================================================== 00:21:08.231 [2024-11-20T15:15:44.167Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1303719 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1303354 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1303354 ']' 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1303354 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.231 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1303354 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1303354' 00:21:08.492 killing process with pid 1303354 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1303354 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1303354 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.492 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:08.492 "subsystems": [ 00:21:08.492 { 00:21:08.492 "subsystem": "keyring", 00:21:08.492 "config": [ 00:21:08.492 { 00:21:08.492 "method": "keyring_file_add_key", 00:21:08.492 "params": { 00:21:08.492 "name": "key0", 00:21:08.492 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:08.492 } 00:21:08.492 } 00:21:08.492 ] 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "subsystem": "iobuf", 00:21:08.492 "config": [ 00:21:08.492 { 00:21:08.492 "method": "iobuf_set_options", 00:21:08.492 "params": { 00:21:08.492 "small_pool_count": 8192, 00:21:08.492 "large_pool_count": 1024, 00:21:08.492 "small_bufsize": 8192, 00:21:08.492 "large_bufsize": 135168, 00:21:08.492 "enable_numa": false 00:21:08.492 } 00:21:08.492 } 00:21:08.492 ] 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "subsystem": "sock", 00:21:08.492 "config": [ 00:21:08.492 { 00:21:08.492 "method": "sock_set_default_impl", 00:21:08.492 "params": { 00:21:08.492 "impl_name": "posix" 00:21:08.492 } 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "method": "sock_impl_set_options", 00:21:08.492 "params": { 00:21:08.492 "impl_name": "ssl", 00:21:08.492 "recv_buf_size": 4096, 00:21:08.492 "send_buf_size": 4096, 00:21:08.492 "enable_recv_pipe": true, 00:21:08.492 "enable_quickack": false, 00:21:08.492 "enable_placement_id": 0, 00:21:08.492 "enable_zerocopy_send_server": true, 00:21:08.492 "enable_zerocopy_send_client": false, 00:21:08.492 "zerocopy_threshold": 0, 00:21:08.492 "tls_version": 0, 00:21:08.492 "enable_ktls": false 00:21:08.492 } 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "method": "sock_impl_set_options", 00:21:08.492 "params": { 00:21:08.492 "impl_name": "posix", 00:21:08.492 "recv_buf_size": 2097152, 00:21:08.492 "send_buf_size": 2097152, 00:21:08.492 "enable_recv_pipe": true, 00:21:08.492 "enable_quickack": false, 00:21:08.492 "enable_placement_id": 0, 00:21:08.492 "enable_zerocopy_send_server": true, 00:21:08.492 "enable_zerocopy_send_client": false, 00:21:08.492 "zerocopy_threshold": 0, 00:21:08.492 "tls_version": 0, 00:21:08.492 "enable_ktls": false 00:21:08.492 } 00:21:08.492 } 00:21:08.492 ] 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "subsystem": "vmd", 00:21:08.492 "config": [] 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "subsystem": "accel", 00:21:08.492 "config": [ 00:21:08.492 { 00:21:08.492 "method": "accel_set_options", 00:21:08.492 "params": { 00:21:08.492 "small_cache_size": 128, 00:21:08.492 "large_cache_size": 16, 00:21:08.492 "task_count": 2048, 00:21:08.492 "sequence_count": 2048, 00:21:08.492 "buf_count": 2048 00:21:08.492 } 00:21:08.492 } 00:21:08.492 ] 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "subsystem": "bdev", 00:21:08.492 "config": [ 00:21:08.492 { 00:21:08.492 "method": "bdev_set_options", 00:21:08.492 "params": { 00:21:08.492 "bdev_io_pool_size": 65535, 00:21:08.492 "bdev_io_cache_size": 256, 00:21:08.492 "bdev_auto_examine": true, 00:21:08.492 "iobuf_small_cache_size": 128, 00:21:08.492 "iobuf_large_cache_size": 16 00:21:08.492 } 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "method": "bdev_raid_set_options", 00:21:08.492 "params": { 00:21:08.492 
"process_window_size_kb": 1024, 00:21:08.492 "process_max_bandwidth_mb_sec": 0 00:21:08.492 } 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "method": "bdev_iscsi_set_options", 00:21:08.492 "params": { 00:21:08.492 "timeout_sec": 30 00:21:08.492 } 00:21:08.492 }, 00:21:08.492 { 00:21:08.492 "method": "bdev_nvme_set_options", 00:21:08.492 "params": { 00:21:08.492 "action_on_timeout": "none", 00:21:08.492 "timeout_us": 0, 00:21:08.492 "timeout_admin_us": 0, 00:21:08.492 "keep_alive_timeout_ms": 10000, 00:21:08.492 "arbitration_burst": 0, 00:21:08.492 "low_priority_weight": 0, 00:21:08.492 "medium_priority_weight": 0, 00:21:08.492 "high_priority_weight": 0, 00:21:08.492 "nvme_adminq_poll_period_us": 10000, 00:21:08.492 "nvme_ioq_poll_period_us": 0, 00:21:08.492 "io_queue_requests": 0, 00:21:08.492 "delay_cmd_submit": true, 00:21:08.492 "transport_retry_count": 4, 00:21:08.492 "bdev_retry_count": 3, 00:21:08.492 "transport_ack_timeout": 0, 00:21:08.492 "ctrlr_loss_timeout_sec": 0, 00:21:08.492 "reconnect_delay_sec": 0, 00:21:08.492 "fast_io_fail_timeout_sec": 0, 00:21:08.492 "disable_auto_failback": false, 00:21:08.492 "generate_uuids": false, 00:21:08.492 "transport_tos": 0, 00:21:08.492 "nvme_error_stat": false, 00:21:08.492 "rdma_srq_size": 0, 00:21:08.492 "io_path_stat": false, 00:21:08.492 "allow_accel_sequence": false, 00:21:08.492 "rdma_max_cq_size": 0, 00:21:08.492 "rdma_cm_event_timeout_ms": 0, 00:21:08.492 "dhchap_digests": [ 00:21:08.492 "sha256", 00:21:08.492 "sha384", 00:21:08.493 "sha512" 00:21:08.493 ], 00:21:08.493 "dhchap_dhgroups": [ 00:21:08.493 "null", 00:21:08.493 "ffdhe2048", 00:21:08.493 "ffdhe3072", 00:21:08.493 "ffdhe4096", 00:21:08.493 "ffdhe6144", 00:21:08.493 "ffdhe8192" 00:21:08.493 ] 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "bdev_nvme_set_hotplug", 00:21:08.493 "params": { 00:21:08.493 "period_us": 100000, 00:21:08.493 "enable": false 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "bdev_malloc_create", 00:21:08.493 "params": { 00:21:08.493 "name": "malloc0", 00:21:08.493 "num_blocks": 8192, 00:21:08.493 "block_size": 4096, 00:21:08.493 "physical_block_size": 4096, 00:21:08.493 "uuid": "f85c8e49-45d8-4d3e-845f-052190ef3d88", 00:21:08.493 "optimal_io_boundary": 0, 00:21:08.493 "md_size": 0, 00:21:08.493 "dif_type": 0, 00:21:08.493 "dif_is_head_of_md": false, 00:21:08.493 "dif_pi_format": 0 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "bdev_wait_for_examine" 00:21:08.493 } 00:21:08.493 ] 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "subsystem": "nbd", 00:21:08.493 "config": [] 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "subsystem": "scheduler", 00:21:08.493 "config": [ 00:21:08.493 { 00:21:08.493 "method": "framework_set_scheduler", 00:21:08.493 "params": { 00:21:08.493 "name": "static" 00:21:08.493 } 00:21:08.493 } 00:21:08.493 ] 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "subsystem": "nvmf", 00:21:08.493 "config": [ 00:21:08.493 { 00:21:08.493 "method": "nvmf_set_config", 00:21:08.493 "params": { 00:21:08.493 "discovery_filter": "match_any", 00:21:08.493 "admin_cmd_passthru": { 00:21:08.493 "identify_ctrlr": false 00:21:08.493 }, 00:21:08.493 "dhchap_digests": [ 00:21:08.493 "sha256", 00:21:08.493 "sha384", 00:21:08.493 "sha512" 00:21:08.493 ], 00:21:08.493 "dhchap_dhgroups": [ 00:21:08.493 "null", 00:21:08.493 "ffdhe2048", 00:21:08.493 "ffdhe3072", 00:21:08.493 "ffdhe4096", 00:21:08.493 "ffdhe6144", 00:21:08.493 "ffdhe8192" 00:21:08.493 ] 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 
00:21:08.493 "method": "nvmf_set_max_subsystems", 00:21:08.493 "params": { 00:21:08.493 "max_subsystems": 1024 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "nvmf_set_crdt", 00:21:08.493 "params": { 00:21:08.493 "crdt1": 0, 00:21:08.493 "crdt2": 0, 00:21:08.493 "crdt3": 0 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "nvmf_create_transport", 00:21:08.493 "params": { 00:21:08.493 "trtype": "TCP", 00:21:08.493 "max_queue_depth": 128, 00:21:08.493 "max_io_qpairs_per_ctrlr": 127, 00:21:08.493 "in_capsule_data_size": 4096, 00:21:08.493 "max_io_size": 131072, 00:21:08.493 "io_unit_size": 131072, 00:21:08.493 "max_aq_depth": 128, 00:21:08.493 "num_shared_buffers": 511, 00:21:08.493 "buf_cache_size": 4294967295, 00:21:08.493 "dif_insert_or_strip": false, 00:21:08.493 "zcopy": false, 00:21:08.493 "c2h_success": false, 00:21:08.493 "sock_priority": 0, 00:21:08.493 "abort_timeout_sec": 1, 00:21:08.493 "ack_timeout": 0, 00:21:08.493 "data_wr_pool_size": 0 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "nvmf_create_subsystem", 00:21:08.493 "params": { 00:21:08.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.493 "allow_any_host": false, 00:21:08.493 "serial_number": "SPDK00000000000001", 00:21:08.493 "model_number": "SPDK bdev Controller", 00:21:08.493 "max_namespaces": 10, 00:21:08.493 "min_cntlid": 1, 00:21:08.493 "max_cntlid": 65519, 00:21:08.493 "ana_reporting": false 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "nvmf_subsystem_add_host", 00:21:08.493 "params": { 00:21:08.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.493 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.493 "psk": "key0" 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "nvmf_subsystem_add_ns", 00:21:08.493 "params": { 00:21:08.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.493 "namespace": { 00:21:08.493 "nsid": 1, 00:21:08.493 "bdev_name": "malloc0", 00:21:08.493 "nguid": "F85C8E4945D84D3E845F052190EF3D88", 00:21:08.493 "uuid": "f85c8e49-45d8-4d3e-845f-052190ef3d88", 00:21:08.493 "no_auto_visible": false 00:21:08.493 } 00:21:08.493 } 00:21:08.493 }, 00:21:08.493 { 00:21:08.493 "method": "nvmf_subsystem_add_listener", 00:21:08.493 "params": { 00:21:08.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.493 "listen_address": { 00:21:08.493 "trtype": "TCP", 00:21:08.493 "adrfam": "IPv4", 00:21:08.493 "traddr": "10.0.0.2", 00:21:08.493 "trsvcid": "4420" 00:21:08.493 }, 00:21:08.493 "secure_channel": true 00:21:08.493 } 00:21:08.493 } 00:21:08.493 ] 00:21:08.493 } 00:21:08.493 ] 00:21:08.493 }' 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1304073 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1304073 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1304073 ']' 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:21:08.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.493 16:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.493 [2024-11-20 16:15:44.340197] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:08.493 [2024-11-20 16:15:44.340253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.754 [2024-11-20 16:15:44.429399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.754 [2024-11-20 16:15:44.457907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.754 [2024-11-20 16:15:44.457939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.754 [2024-11-20 16:15:44.457945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.755 [2024-11-20 16:15:44.457949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.755 [2024-11-20 16:15:44.457953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.755 [2024-11-20 16:15:44.458438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.755 [2024-11-20 16:15:44.650788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.755 [2024-11-20 16:15:44.682819] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:08.755 [2024-11-20 16:15:44.683022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1304245 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1304245 /var/tmp/bdevperf.sock 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1304245 ']' 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
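The -c /dev/fd/62 on the nvmf_tgt relaunch above is how the JSON captured earlier by save_config gets replayed into a fresh target: the fd number points at what is presumably a bash process substitution over the echoed config (the bdevperf relaunch just below does the same via /dev/fd/63). A minimal sketch of that pattern, assuming tgtconf holds the saved JSON and nvmf_tgt is on PATH:

# replay a saved SPDK config into a fresh target; <(echo ...) surfaces
# inside the child process as /dev/fd/62 (or whichever fd bash assigns)
nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
nvmfpid=$!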
00:21:09.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.415 16:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:09.415 "subsystems": [ 00:21:09.415 { 00:21:09.415 "subsystem": "keyring", 00:21:09.415 "config": [ 00:21:09.415 { 00:21:09.415 "method": "keyring_file_add_key", 00:21:09.415 "params": { 00:21:09.415 "name": "key0", 00:21:09.415 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:09.415 } 00:21:09.415 } 00:21:09.415 ] 00:21:09.415 }, 00:21:09.415 { 00:21:09.415 "subsystem": "iobuf", 00:21:09.415 "config": [ 00:21:09.415 { 00:21:09.415 "method": "iobuf_set_options", 00:21:09.415 "params": { 00:21:09.416 "small_pool_count": 8192, 00:21:09.416 "large_pool_count": 1024, 00:21:09.416 "small_bufsize": 8192, 00:21:09.416 "large_bufsize": 135168, 00:21:09.416 "enable_numa": false 00:21:09.416 } 00:21:09.416 } 00:21:09.416 ] 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "subsystem": "sock", 00:21:09.416 "config": [ 00:21:09.416 { 00:21:09.416 "method": "sock_set_default_impl", 00:21:09.416 "params": { 00:21:09.416 "impl_name": "posix" 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "sock_impl_set_options", 00:21:09.416 "params": { 00:21:09.416 "impl_name": "ssl", 00:21:09.416 "recv_buf_size": 4096, 00:21:09.416 "send_buf_size": 4096, 00:21:09.416 "enable_recv_pipe": true, 00:21:09.416 "enable_quickack": false, 00:21:09.416 "enable_placement_id": 0, 00:21:09.416 "enable_zerocopy_send_server": true, 00:21:09.416 "enable_zerocopy_send_client": false, 00:21:09.416 "zerocopy_threshold": 0, 00:21:09.416 "tls_version": 0, 00:21:09.416 "enable_ktls": false 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "sock_impl_set_options", 00:21:09.416 "params": { 00:21:09.416 "impl_name": "posix", 00:21:09.416 "recv_buf_size": 2097152, 00:21:09.416 "send_buf_size": 2097152, 00:21:09.416 "enable_recv_pipe": true, 00:21:09.416 "enable_quickack": false, 00:21:09.416 "enable_placement_id": 0, 00:21:09.416 "enable_zerocopy_send_server": true, 00:21:09.416 "enable_zerocopy_send_client": false, 00:21:09.416 "zerocopy_threshold": 0, 00:21:09.416 "tls_version": 0, 00:21:09.416 "enable_ktls": false 00:21:09.416 } 00:21:09.416 } 00:21:09.416 ] 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "subsystem": "vmd", 00:21:09.416 "config": [] 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "subsystem": "accel", 00:21:09.416 "config": [ 00:21:09.416 { 00:21:09.416 "method": "accel_set_options", 00:21:09.416 "params": { 00:21:09.416 "small_cache_size": 128, 00:21:09.416 "large_cache_size": 16, 00:21:09.416 "task_count": 2048, 00:21:09.416 "sequence_count": 2048, 00:21:09.416 "buf_count": 2048 00:21:09.416 } 00:21:09.416 } 00:21:09.416 ] 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "subsystem": "bdev", 00:21:09.416 "config": [ 00:21:09.416 { 00:21:09.416 "method": "bdev_set_options", 00:21:09.416 "params": { 00:21:09.416 "bdev_io_pool_size": 65535, 00:21:09.416 "bdev_io_cache_size": 256, 00:21:09.416 "bdev_auto_examine": true, 00:21:09.416 "iobuf_small_cache_size": 128, 
00:21:09.416 "iobuf_large_cache_size": 16 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "bdev_raid_set_options", 00:21:09.416 "params": { 00:21:09.416 "process_window_size_kb": 1024, 00:21:09.416 "process_max_bandwidth_mb_sec": 0 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "bdev_iscsi_set_options", 00:21:09.416 "params": { 00:21:09.416 "timeout_sec": 30 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "bdev_nvme_set_options", 00:21:09.416 "params": { 00:21:09.416 "action_on_timeout": "none", 00:21:09.416 "timeout_us": 0, 00:21:09.416 "timeout_admin_us": 0, 00:21:09.416 "keep_alive_timeout_ms": 10000, 00:21:09.416 "arbitration_burst": 0, 00:21:09.416 "low_priority_weight": 0, 00:21:09.416 "medium_priority_weight": 0, 00:21:09.416 "high_priority_weight": 0, 00:21:09.416 "nvme_adminq_poll_period_us": 10000, 00:21:09.416 "nvme_ioq_poll_period_us": 0, 00:21:09.416 "io_queue_requests": 512, 00:21:09.416 "delay_cmd_submit": true, 00:21:09.416 "transport_retry_count": 4, 00:21:09.416 "bdev_retry_count": 3, 00:21:09.416 "transport_ack_timeout": 0, 00:21:09.416 "ctrlr_loss_timeout_sec": 0, 00:21:09.416 "reconnect_delay_sec": 0, 00:21:09.416 "fast_io_fail_timeout_sec": 0, 00:21:09.416 "disable_auto_failback": false, 00:21:09.416 "generate_uuids": false, 00:21:09.416 "transport_tos": 0, 00:21:09.416 "nvme_error_stat": false, 00:21:09.416 "rdma_srq_size": 0, 00:21:09.416 "io_path_stat": false, 00:21:09.416 "allow_accel_sequence": false, 00:21:09.416 "rdma_max_cq_size": 0, 00:21:09.416 "rdma_cm_event_timeout_ms": 0, 00:21:09.416 "dhchap_digests": [ 00:21:09.416 "sha256", 00:21:09.416 "sha384", 00:21:09.416 "sha512" 00:21:09.416 ], 00:21:09.416 "dhchap_dhgroups": [ 00:21:09.416 "null", 00:21:09.416 "ffdhe2048", 00:21:09.416 "ffdhe3072", 00:21:09.416 "ffdhe4096", 00:21:09.416 "ffdhe6144", 00:21:09.416 "ffdhe8192" 00:21:09.416 ] 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "bdev_nvme_attach_controller", 00:21:09.416 "params": { 00:21:09.416 "name": "TLSTEST", 00:21:09.416 "trtype": "TCP", 00:21:09.416 "adrfam": "IPv4", 00:21:09.416 "traddr": "10.0.0.2", 00:21:09.416 "trsvcid": "4420", 00:21:09.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.416 "prchk_reftag": false, 00:21:09.416 "prchk_guard": false, 00:21:09.416 "ctrlr_loss_timeout_sec": 0, 00:21:09.416 "reconnect_delay_sec": 0, 00:21:09.416 "fast_io_fail_timeout_sec": 0, 00:21:09.416 "psk": "key0", 00:21:09.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.416 "hdgst": false, 00:21:09.416 "ddgst": false, 00:21:09.416 "multipath": "multipath" 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "bdev_nvme_set_hotplug", 00:21:09.416 "params": { 00:21:09.416 "period_us": 100000, 00:21:09.416 "enable": false 00:21:09.416 } 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "method": "bdev_wait_for_examine" 00:21:09.416 } 00:21:09.416 ] 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "subsystem": "nbd", 00:21:09.416 "config": [] 00:21:09.416 } 00:21:09.416 ] 00:21:09.416 }' 00:21:09.416 [2024-11-20 16:15:45.219597] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:21:09.416 [2024-11-20 16:15:45.219653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304245 ] 00:21:09.416 [2024-11-20 16:15:45.301403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.416 [2024-11-20 16:15:45.330568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.676 [2024-11-20 16:15:45.464535] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.246 16:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.246 16:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:10.246 16:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:10.246 Running I/O for 10 seconds... 00:21:12.578 4496.00 IOPS, 17.56 MiB/s [2024-11-20T15:15:49.452Z] 5416.50 IOPS, 21.16 MiB/s [2024-11-20T15:15:50.393Z] 5448.00 IOPS, 21.28 MiB/s [2024-11-20T15:15:51.334Z] 5417.75 IOPS, 21.16 MiB/s [2024-11-20T15:15:52.274Z] 5497.60 IOPS, 21.48 MiB/s [2024-11-20T15:15:53.216Z] 5594.33 IOPS, 21.85 MiB/s [2024-11-20T15:15:54.157Z] 5642.00 IOPS, 22.04 MiB/s [2024-11-20T15:15:55.539Z] 5701.25 IOPS, 22.27 MiB/s [2024-11-20T15:15:56.480Z] 5668.89 IOPS, 22.14 MiB/s [2024-11-20T15:15:56.480Z] 5745.40 IOPS, 22.44 MiB/s 00:21:20.544 Latency(us) 00:21:20.544 [2024-11-20T15:15:56.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.544 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.544 Verification LBA range: start 0x0 length 0x2000 00:21:20.544 TLSTESTn1 : 10.01 5751.43 22.47 0.00 0.00 22225.59 4560.21 33423.36 00:21:20.544 [2024-11-20T15:15:56.480Z] =================================================================================================================== 00:21:20.544 [2024-11-20T15:15:56.480Z] Total : 5751.43 22.47 0.00 0.00 22225.59 4560.21 33423.36 00:21:20.544 { 00:21:20.544 "results": [ 00:21:20.544 { 00:21:20.544 "job": "TLSTESTn1", 00:21:20.544 "core_mask": "0x4", 00:21:20.544 "workload": "verify", 00:21:20.544 "status": "finished", 00:21:20.544 "verify_range": { 00:21:20.544 "start": 0, 00:21:20.544 "length": 8192 00:21:20.544 }, 00:21:20.544 "queue_depth": 128, 00:21:20.544 "io_size": 4096, 00:21:20.544 "runtime": 10.011593, 00:21:20.544 "iops": 5751.432364459882, 00:21:20.544 "mibps": 22.466532673671413, 00:21:20.544 "io_failed": 0, 00:21:20.544 "io_timeout": 0, 00:21:20.544 "avg_latency_us": 22225.58794972879, 00:21:20.544 "min_latency_us": 4560.213333333333, 00:21:20.544 "max_latency_us": 33423.36 00:21:20.544 } 00:21:20.544 ], 00:21:20.544 "core_count": 1 00:21:20.544 } 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1304245 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1304245 ']' 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1304245 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1304245 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1304245' 00:21:20.544 killing process with pid 1304245 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1304245 00:21:20.544 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.544 00:21:20.544 Latency(us) 00:21:20.544 [2024-11-20T15:15:56.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.544 [2024-11-20T15:15:56.480Z] =================================================================================================================== 00:21:20.544 [2024-11-20T15:15:56.480Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1304245 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1304073 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1304073 ']' 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1304073 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1304073 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1304073' 00:21:20.544 killing process with pid 1304073 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1304073 00:21:20.544 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1304073 00:21:20.805 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1306445 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1306445 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1306445 ']' 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.806 16:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.806 [2024-11-20 16:15:56.550768] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:20.806 [2024-11-20 16:15:56.550822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.806 [2024-11-20 16:15:56.645802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.806 [2024-11-20 16:15:56.684311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.806 [2024-11-20 16:15:56.684356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.806 [2024-11-20 16:15:56.684365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.806 [2024-11-20 16:15:56.684372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.806 [2024-11-20 16:15:56.684378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
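The fresh target started above is configured by the same setup_nvmf_tgt sequence the trace repeats below. Condensed into the bare RPC calls visible in the log (a sketch, not the verbatim tls.sh helper; rpc.py abbreviates the full scripts/rpc.py path, and the key path is the PSK file from the trace):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-capable (still flagged experimental in the notices)
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
# 32 MiB backing bdev with a 4 KiB block size (8192 blocks in the saved config)
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# register the PSK file with the keyring, then bind it to the allowed host
rpc.py keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0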
00:21:20.806 [2024-11-20 16:15:56.684996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lD8nt2WzYN 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lD8nt2WzYN 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.749 [2024-11-20 16:15:57.562542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.749 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.010 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.010 [2024-11-20 16:15:57.919444] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.010 [2024-11-20 16:15:57.919749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.271 16:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:22.271 malloc0 00:21:22.271 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.533 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:21:22.794 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1306813 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1306813 /var/tmp/bdevperf.sock 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1306813 ']' 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.055 16:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.055 [2024-11-20 16:15:58.776067] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:23.055 [2024-11-20 16:15:58.776135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306813 ] 00:21:23.055 [2024-11-20 16:15:58.866976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.055 [2024-11-20 16:15:58.901114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.706 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.706 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:23.706 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:21:24.011 16:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:24.011 [2024-11-20 16:15:59.931134] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.290 nvme0n1 00:21:24.290 16:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.290 Running I/O for 1 seconds... 
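Every run in this trace uses 4096-byte I/O (-o 4096 / -o 4k), so the MiB/s column in each progress tick is simply IOPS divided by 256 (IOPS x 4096 B / 2^20 B per MiB). A quick check against the first sample of the 10-second run above, assuming bc is available:

# 4 KiB per I/O: MiB/s = IOPS * 4096 / 1048576 = IOPS / 256
echo 'scale=2; 4496.00 / 256' | bc    # prints 17.56, matching the logged 17.56 MiB/s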
00:21:25.231 4283.00 IOPS, 16.73 MiB/s 00:21:25.231 Latency(us) 00:21:25.231 [2024-11-20T15:16:01.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.231 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:25.231 Verification LBA range: start 0x0 length 0x2000 00:21:25.231 nvme0n1 : 1.02 4329.16 16.91 0.00 0.00 29345.96 4505.60 47404.37 00:21:25.231 [2024-11-20T15:16:01.167Z] =================================================================================================================== 00:21:25.231 [2024-11-20T15:16:01.167Z] Total : 4329.16 16.91 0.00 0.00 29345.96 4505.60 47404.37 00:21:25.231 { 00:21:25.231 "results": [ 00:21:25.231 { 00:21:25.231 "job": "nvme0n1", 00:21:25.231 "core_mask": "0x2", 00:21:25.231 "workload": "verify", 00:21:25.231 "status": "finished", 00:21:25.231 "verify_range": { 00:21:25.231 "start": 0, 00:21:25.231 "length": 8192 00:21:25.231 }, 00:21:25.231 "queue_depth": 128, 00:21:25.231 "io_size": 4096, 00:21:25.231 "runtime": 1.018905, 00:21:25.231 "iops": 4329.157281591512, 00:21:25.231 "mibps": 16.910770631216845, 00:21:25.231 "io_failed": 0, 00:21:25.231 "io_timeout": 0, 00:21:25.231 "avg_latency_us": 29345.96270535782, 00:21:25.231 "min_latency_us": 4505.6, 00:21:25.231 "max_latency_us": 47404.37333333334 00:21:25.231 } 00:21:25.231 ], 00:21:25.231 "core_count": 1 00:21:25.231 } 00:21:25.231 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1306813 00:21:25.231 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1306813 ']' 00:21:25.231 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1306813 00:21:25.231 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1306813 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1306813' 00:21:25.497 killing process with pid 1306813 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1306813 00:21:25.497 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.497 00:21:25.497 Latency(us) 00:21:25.497 [2024-11-20T15:16:01.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.497 [2024-11-20T15:16:01.433Z] =================================================================================================================== 00:21:25.497 [2024-11-20T15:16:01.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1306813 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1306445 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1306445 ']' 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1306445 00:21:25.497 16:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1306445 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1306445' 00:21:25.497 killing process with pid 1306445 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1306445 00:21:25.497 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1306445 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1307488 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1307488 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1307488 ']' 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.758 16:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.758 [2024-11-20 16:16:01.593454] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:25.758 [2024-11-20 16:16:01.593509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.758 [2024-11-20 16:16:01.683551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.020 [2024-11-20 16:16:01.723497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.020 [2024-11-20 16:16:01.723545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:26.020 [2024-11-20 16:16:01.723553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.020 [2024-11-20 16:16:01.723560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.020 [2024-11-20 16:16:01.723566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.020 [2024-11-20 16:16:01.724230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.592 [2024-11-20 16:16:02.459010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.592 malloc0 00:21:26.592 [2024-11-20 16:16:02.489184] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.592 [2024-11-20 16:16:02.489525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1307605 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1307605 /var/tmp/bdevperf.sock 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1307605 ']' 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.592 16:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.853 [2024-11-20 16:16:02.571840] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
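Note on the steps just traced: nvmfappstart launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, waitforlisten polls its RPC socket, and rpc_cmd then builds the TLS-enabled target (TCP transport, malloc0 namespace, listener on 10.0.0.2:4420). A hedged reconstruction follows; the rpc.py flag spellings are assumptions, the authoritative record being the save_config JSON dumped later in this log:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    until $RPC -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done  # waitforlisten, roughly
    $RPC keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN
    $RPC nvmf_create_transport -t tcp
    $RPC bdev_malloc_create 32 4096 -b malloc0   # 8192 blocks x 4096 B = 32 MiB, per the config dump
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --sock-impl ssl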
00:21:26.853 [2024-11-20 16:16:02.571908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307605 ] 00:21:26.854 [2024-11-20 16:16:02.660111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.854 [2024-11-20 16:16:02.694499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.797 16:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.797 16:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:27.797 16:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD8nt2WzYN 00:21:27.797 16:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:27.797 [2024-11-20 16:16:03.712581] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.057 nvme0n1 00:21:28.057 16:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:28.057 Running I/O for 1 seconds... 00:21:29.260 5353.00 IOPS, 20.91 MiB/s 00:21:29.260 Latency(us) 00:21:29.260 [2024-11-20T15:16:05.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.260 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:29.260 Verification LBA range: start 0x0 length 0x2000 00:21:29.260 nvme0n1 : 1.04 5275.13 20.61 0.00 0.00 23866.09 5079.04 37792.43 00:21:29.260 [2024-11-20T15:16:05.196Z] =================================================================================================================== 00:21:29.260 [2024-11-20T15:16:05.196Z] Total : 5275.13 20.61 0.00 0.00 23866.09 5079.04 37792.43 00:21:29.260 { 00:21:29.260 "results": [ 00:21:29.260 { 00:21:29.260 "job": "nvme0n1", 00:21:29.260 "core_mask": "0x2", 00:21:29.260 "workload": "verify", 00:21:29.260 "status": "finished", 00:21:29.260 "verify_range": { 00:21:29.260 "start": 0, 00:21:29.260 "length": 8192 00:21:29.260 }, 00:21:29.260 "queue_depth": 128, 00:21:29.260 "io_size": 4096, 00:21:29.260 "runtime": 1.039026, 00:21:29.260 "iops": 5275.1326723296625, 00:21:29.260 "mibps": 20.605987001287744, 00:21:29.260 "io_failed": 0, 00:21:29.260 "io_timeout": 0, 00:21:29.260 "avg_latency_us": 23866.091433436723, 00:21:29.260 "min_latency_us": 5079.04, 00:21:29.260 "max_latency_us": 37792.426666666666 00:21:29.260 } 00:21:29.260 ], 00:21:29.260 "core_count": 1 00:21:29.260 } 00:21:29.260 16:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:29.260 16:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.260 16:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.260 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.260 16:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:29.260 "subsystems": [ 00:21:29.260 { 00:21:29.260 "subsystem": "keyring", 00:21:29.260 "config": [ 00:21:29.260 { 00:21:29.260 "method": "keyring_file_add_key", 00:21:29.260 "params": { 00:21:29.260 "name": "key0", 00:21:29.260 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:29.260 } 00:21:29.260 } 00:21:29.260 ] 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "subsystem": "iobuf", 00:21:29.260 "config": [ 00:21:29.260 { 00:21:29.260 "method": "iobuf_set_options", 00:21:29.260 "params": { 00:21:29.260 "small_pool_count": 8192, 00:21:29.260 "large_pool_count": 1024, 00:21:29.260 "small_bufsize": 8192, 00:21:29.260 "large_bufsize": 135168, 00:21:29.260 "enable_numa": false 00:21:29.260 } 00:21:29.260 } 00:21:29.260 ] 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "subsystem": "sock", 00:21:29.260 "config": [ 00:21:29.260 { 00:21:29.260 "method": "sock_set_default_impl", 00:21:29.260 "params": { 00:21:29.260 "impl_name": "posix" 00:21:29.260 } 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "method": "sock_impl_set_options", 00:21:29.260 "params": { 00:21:29.260 "impl_name": "ssl", 00:21:29.260 "recv_buf_size": 4096, 00:21:29.260 "send_buf_size": 4096, 00:21:29.260 "enable_recv_pipe": true, 00:21:29.260 "enable_quickack": false, 00:21:29.260 "enable_placement_id": 0, 00:21:29.260 "enable_zerocopy_send_server": true, 00:21:29.260 "enable_zerocopy_send_client": false, 00:21:29.260 "zerocopy_threshold": 0, 00:21:29.260 "tls_version": 0, 00:21:29.260 "enable_ktls": false 00:21:29.260 } 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "method": "sock_impl_set_options", 00:21:29.260 "params": { 00:21:29.260 "impl_name": "posix", 00:21:29.260 "recv_buf_size": 2097152, 00:21:29.260 "send_buf_size": 2097152, 00:21:29.260 "enable_recv_pipe": true, 00:21:29.260 "enable_quickack": false, 00:21:29.260 "enable_placement_id": 0, 00:21:29.260 "enable_zerocopy_send_server": true, 00:21:29.260 "enable_zerocopy_send_client": false, 00:21:29.260 "zerocopy_threshold": 0, 00:21:29.260 "tls_version": 0, 00:21:29.260 "enable_ktls": false 00:21:29.260 } 00:21:29.260 } 00:21:29.260 ] 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "subsystem": "vmd", 00:21:29.260 "config": [] 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "subsystem": "accel", 00:21:29.260 "config": [ 00:21:29.260 { 00:21:29.260 "method": "accel_set_options", 00:21:29.260 "params": { 00:21:29.260 "small_cache_size": 128, 00:21:29.260 "large_cache_size": 16, 00:21:29.260 "task_count": 2048, 00:21:29.260 "sequence_count": 2048, 00:21:29.260 "buf_count": 2048 00:21:29.260 } 00:21:29.260 } 00:21:29.260 ] 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "subsystem": "bdev", 00:21:29.260 "config": [ 00:21:29.260 { 00:21:29.260 "method": "bdev_set_options", 00:21:29.260 "params": { 00:21:29.260 "bdev_io_pool_size": 65535, 00:21:29.260 "bdev_io_cache_size": 256, 00:21:29.260 "bdev_auto_examine": true, 00:21:29.260 "iobuf_small_cache_size": 128, 00:21:29.260 "iobuf_large_cache_size": 16 00:21:29.260 } 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "method": "bdev_raid_set_options", 00:21:29.260 "params": { 00:21:29.260 "process_window_size_kb": 1024, 00:21:29.260 "process_max_bandwidth_mb_sec": 0 00:21:29.260 } 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "method": "bdev_iscsi_set_options", 00:21:29.260 "params": { 00:21:29.260 "timeout_sec": 30 00:21:29.260 } 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "method": "bdev_nvme_set_options", 00:21:29.260 "params": { 00:21:29.260 "action_on_timeout": "none", 00:21:29.260 
"timeout_us": 0, 00:21:29.260 "timeout_admin_us": 0, 00:21:29.260 "keep_alive_timeout_ms": 10000, 00:21:29.260 "arbitration_burst": 0, 00:21:29.260 "low_priority_weight": 0, 00:21:29.260 "medium_priority_weight": 0, 00:21:29.260 "high_priority_weight": 0, 00:21:29.261 "nvme_adminq_poll_period_us": 10000, 00:21:29.261 "nvme_ioq_poll_period_us": 0, 00:21:29.261 "io_queue_requests": 0, 00:21:29.261 "delay_cmd_submit": true, 00:21:29.261 "transport_retry_count": 4, 00:21:29.261 "bdev_retry_count": 3, 00:21:29.261 "transport_ack_timeout": 0, 00:21:29.261 "ctrlr_loss_timeout_sec": 0, 00:21:29.261 "reconnect_delay_sec": 0, 00:21:29.261 "fast_io_fail_timeout_sec": 0, 00:21:29.261 "disable_auto_failback": false, 00:21:29.261 "generate_uuids": false, 00:21:29.261 "transport_tos": 0, 00:21:29.261 "nvme_error_stat": false, 00:21:29.261 "rdma_srq_size": 0, 00:21:29.261 "io_path_stat": false, 00:21:29.261 "allow_accel_sequence": false, 00:21:29.261 "rdma_max_cq_size": 0, 00:21:29.261 "rdma_cm_event_timeout_ms": 0, 00:21:29.261 "dhchap_digests": [ 00:21:29.261 "sha256", 00:21:29.261 "sha384", 00:21:29.261 "sha512" 00:21:29.261 ], 00:21:29.261 "dhchap_dhgroups": [ 00:21:29.261 "null", 00:21:29.261 "ffdhe2048", 00:21:29.261 "ffdhe3072", 00:21:29.261 "ffdhe4096", 00:21:29.261 "ffdhe6144", 00:21:29.261 "ffdhe8192" 00:21:29.261 ] 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "bdev_nvme_set_hotplug", 00:21:29.261 "params": { 00:21:29.261 "period_us": 100000, 00:21:29.261 "enable": false 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "bdev_malloc_create", 00:21:29.261 "params": { 00:21:29.261 "name": "malloc0", 00:21:29.261 "num_blocks": 8192, 00:21:29.261 "block_size": 4096, 00:21:29.261 "physical_block_size": 4096, 00:21:29.261 "uuid": "cf4d3106-74ac-4756-8759-adea20c4cb83", 00:21:29.261 "optimal_io_boundary": 0, 00:21:29.261 "md_size": 0, 00:21:29.261 "dif_type": 0, 00:21:29.261 "dif_is_head_of_md": false, 00:21:29.261 "dif_pi_format": 0 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "bdev_wait_for_examine" 00:21:29.261 } 00:21:29.261 ] 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "subsystem": "nbd", 00:21:29.261 "config": [] 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "subsystem": "scheduler", 00:21:29.261 "config": [ 00:21:29.261 { 00:21:29.261 "method": "framework_set_scheduler", 00:21:29.261 "params": { 00:21:29.261 "name": "static" 00:21:29.261 } 00:21:29.261 } 00:21:29.261 ] 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "subsystem": "nvmf", 00:21:29.261 "config": [ 00:21:29.261 { 00:21:29.261 "method": "nvmf_set_config", 00:21:29.261 "params": { 00:21:29.261 "discovery_filter": "match_any", 00:21:29.261 "admin_cmd_passthru": { 00:21:29.261 "identify_ctrlr": false 00:21:29.261 }, 00:21:29.261 "dhchap_digests": [ 00:21:29.261 "sha256", 00:21:29.261 "sha384", 00:21:29.261 "sha512" 00:21:29.261 ], 00:21:29.261 "dhchap_dhgroups": [ 00:21:29.261 "null", 00:21:29.261 "ffdhe2048", 00:21:29.261 "ffdhe3072", 00:21:29.261 "ffdhe4096", 00:21:29.261 "ffdhe6144", 00:21:29.261 "ffdhe8192" 00:21:29.261 ] 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "nvmf_set_max_subsystems", 00:21:29.261 "params": { 00:21:29.261 "max_subsystems": 1024 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "nvmf_set_crdt", 00:21:29.261 "params": { 00:21:29.261 "crdt1": 0, 00:21:29.261 "crdt2": 0, 00:21:29.261 "crdt3": 0 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "nvmf_create_transport", 00:21:29.261 "params": 
{ 00:21:29.261 "trtype": "TCP", 00:21:29.261 "max_queue_depth": 128, 00:21:29.261 "max_io_qpairs_per_ctrlr": 127, 00:21:29.261 "in_capsule_data_size": 4096, 00:21:29.261 "max_io_size": 131072, 00:21:29.261 "io_unit_size": 131072, 00:21:29.261 "max_aq_depth": 128, 00:21:29.261 "num_shared_buffers": 511, 00:21:29.261 "buf_cache_size": 4294967295, 00:21:29.261 "dif_insert_or_strip": false, 00:21:29.261 "zcopy": false, 00:21:29.261 "c2h_success": false, 00:21:29.261 "sock_priority": 0, 00:21:29.261 "abort_timeout_sec": 1, 00:21:29.261 "ack_timeout": 0, 00:21:29.261 "data_wr_pool_size": 0 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "nvmf_create_subsystem", 00:21:29.261 "params": { 00:21:29.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.261 "allow_any_host": false, 00:21:29.261 "serial_number": "00000000000000000000", 00:21:29.261 "model_number": "SPDK bdev Controller", 00:21:29.261 "max_namespaces": 32, 00:21:29.261 "min_cntlid": 1, 00:21:29.261 "max_cntlid": 65519, 00:21:29.261 "ana_reporting": false 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "nvmf_subsystem_add_host", 00:21:29.261 "params": { 00:21:29.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.261 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.261 "psk": "key0" 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "nvmf_subsystem_add_ns", 00:21:29.261 "params": { 00:21:29.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.261 "namespace": { 00:21:29.261 "nsid": 1, 00:21:29.261 "bdev_name": "malloc0", 00:21:29.261 "nguid": "CF4D310674AC47568759ADEA20C4CB83", 00:21:29.261 "uuid": "cf4d3106-74ac-4756-8759-adea20c4cb83", 00:21:29.261 "no_auto_visible": false 00:21:29.261 } 00:21:29.261 } 00:21:29.261 }, 00:21:29.261 { 00:21:29.261 "method": "nvmf_subsystem_add_listener", 00:21:29.261 "params": { 00:21:29.261 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.261 "listen_address": { 00:21:29.261 "trtype": "TCP", 00:21:29.261 "adrfam": "IPv4", 00:21:29.261 "traddr": "10.0.0.2", 00:21:29.261 "trsvcid": "4420" 00:21:29.261 }, 00:21:29.261 "secure_channel": false, 00:21:29.261 "sock_impl": "ssl" 00:21:29.261 } 00:21:29.261 } 00:21:29.261 ] 00:21:29.261 } 00:21:29.261 ] 00:21:29.261 }' 00:21:29.261 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:29.522 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:29.522 "subsystems": [ 00:21:29.522 { 00:21:29.522 "subsystem": "keyring", 00:21:29.522 "config": [ 00:21:29.522 { 00:21:29.522 "method": "keyring_file_add_key", 00:21:29.522 "params": { 00:21:29.522 "name": "key0", 00:21:29.522 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:29.522 } 00:21:29.522 } 00:21:29.522 ] 00:21:29.522 }, 00:21:29.522 { 00:21:29.522 "subsystem": "iobuf", 00:21:29.522 "config": [ 00:21:29.522 { 00:21:29.522 "method": "iobuf_set_options", 00:21:29.522 "params": { 00:21:29.522 "small_pool_count": 8192, 00:21:29.522 "large_pool_count": 1024, 00:21:29.522 "small_bufsize": 8192, 00:21:29.522 "large_bufsize": 135168, 00:21:29.522 "enable_numa": false 00:21:29.522 } 00:21:29.522 } 00:21:29.522 ] 00:21:29.522 }, 00:21:29.522 { 00:21:29.522 "subsystem": "sock", 00:21:29.522 "config": [ 00:21:29.522 { 00:21:29.522 "method": "sock_set_default_impl", 00:21:29.522 "params": { 00:21:29.522 "impl_name": "posix" 00:21:29.522 } 00:21:29.522 }, 00:21:29.522 { 00:21:29.522 "method": "sock_impl_set_options", 00:21:29.522 
"params": { 00:21:29.522 "impl_name": "ssl", 00:21:29.522 "recv_buf_size": 4096, 00:21:29.522 "send_buf_size": 4096, 00:21:29.522 "enable_recv_pipe": true, 00:21:29.522 "enable_quickack": false, 00:21:29.522 "enable_placement_id": 0, 00:21:29.522 "enable_zerocopy_send_server": true, 00:21:29.522 "enable_zerocopy_send_client": false, 00:21:29.522 "zerocopy_threshold": 0, 00:21:29.522 "tls_version": 0, 00:21:29.522 "enable_ktls": false 00:21:29.522 } 00:21:29.522 }, 00:21:29.522 { 00:21:29.522 "method": "sock_impl_set_options", 00:21:29.522 "params": { 00:21:29.522 "impl_name": "posix", 00:21:29.522 "recv_buf_size": 2097152, 00:21:29.522 "send_buf_size": 2097152, 00:21:29.522 "enable_recv_pipe": true, 00:21:29.522 "enable_quickack": false, 00:21:29.522 "enable_placement_id": 0, 00:21:29.522 "enable_zerocopy_send_server": true, 00:21:29.522 "enable_zerocopy_send_client": false, 00:21:29.522 "zerocopy_threshold": 0, 00:21:29.522 "tls_version": 0, 00:21:29.522 "enable_ktls": false 00:21:29.522 } 00:21:29.522 } 00:21:29.522 ] 00:21:29.522 }, 00:21:29.522 { 00:21:29.522 "subsystem": "vmd", 00:21:29.522 "config": [] 00:21:29.522 }, 00:21:29.522 { 00:21:29.522 "subsystem": "accel", 00:21:29.522 "config": [ 00:21:29.522 { 00:21:29.522 "method": "accel_set_options", 00:21:29.522 "params": { 00:21:29.522 "small_cache_size": 128, 00:21:29.522 "large_cache_size": 16, 00:21:29.522 "task_count": 2048, 00:21:29.522 "sequence_count": 2048, 00:21:29.522 "buf_count": 2048 00:21:29.522 } 00:21:29.522 } 00:21:29.522 ] 00:21:29.522 }, 00:21:29.522 { 00:21:29.522 "subsystem": "bdev", 00:21:29.522 "config": [ 00:21:29.522 { 00:21:29.522 "method": "bdev_set_options", 00:21:29.523 "params": { 00:21:29.523 "bdev_io_pool_size": 65535, 00:21:29.523 "bdev_io_cache_size": 256, 00:21:29.523 "bdev_auto_examine": true, 00:21:29.523 "iobuf_small_cache_size": 128, 00:21:29.523 "iobuf_large_cache_size": 16 00:21:29.523 } 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "method": "bdev_raid_set_options", 00:21:29.523 "params": { 00:21:29.523 "process_window_size_kb": 1024, 00:21:29.523 "process_max_bandwidth_mb_sec": 0 00:21:29.523 } 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "method": "bdev_iscsi_set_options", 00:21:29.523 "params": { 00:21:29.523 "timeout_sec": 30 00:21:29.523 } 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "method": "bdev_nvme_set_options", 00:21:29.523 "params": { 00:21:29.523 "action_on_timeout": "none", 00:21:29.523 "timeout_us": 0, 00:21:29.523 "timeout_admin_us": 0, 00:21:29.523 "keep_alive_timeout_ms": 10000, 00:21:29.523 "arbitration_burst": 0, 00:21:29.523 "low_priority_weight": 0, 00:21:29.523 "medium_priority_weight": 0, 00:21:29.523 "high_priority_weight": 0, 00:21:29.523 "nvme_adminq_poll_period_us": 10000, 00:21:29.523 "nvme_ioq_poll_period_us": 0, 00:21:29.523 "io_queue_requests": 512, 00:21:29.523 "delay_cmd_submit": true, 00:21:29.523 "transport_retry_count": 4, 00:21:29.523 "bdev_retry_count": 3, 00:21:29.523 "transport_ack_timeout": 0, 00:21:29.523 "ctrlr_loss_timeout_sec": 0, 00:21:29.523 "reconnect_delay_sec": 0, 00:21:29.523 "fast_io_fail_timeout_sec": 0, 00:21:29.523 "disable_auto_failback": false, 00:21:29.523 "generate_uuids": false, 00:21:29.523 "transport_tos": 0, 00:21:29.523 "nvme_error_stat": false, 00:21:29.523 "rdma_srq_size": 0, 00:21:29.523 "io_path_stat": false, 00:21:29.523 "allow_accel_sequence": false, 00:21:29.523 "rdma_max_cq_size": 0, 00:21:29.523 "rdma_cm_event_timeout_ms": 0, 00:21:29.523 "dhchap_digests": [ 00:21:29.523 "sha256", 00:21:29.523 "sha384", 00:21:29.523 
"sha512" 00:21:29.523 ], 00:21:29.523 "dhchap_dhgroups": [ 00:21:29.523 "null", 00:21:29.523 "ffdhe2048", 00:21:29.523 "ffdhe3072", 00:21:29.523 "ffdhe4096", 00:21:29.523 "ffdhe6144", 00:21:29.523 "ffdhe8192" 00:21:29.523 ] 00:21:29.523 } 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "method": "bdev_nvme_attach_controller", 00:21:29.523 "params": { 00:21:29.523 "name": "nvme0", 00:21:29.523 "trtype": "TCP", 00:21:29.523 "adrfam": "IPv4", 00:21:29.523 "traddr": "10.0.0.2", 00:21:29.523 "trsvcid": "4420", 00:21:29.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.523 "prchk_reftag": false, 00:21:29.523 "prchk_guard": false, 00:21:29.523 "ctrlr_loss_timeout_sec": 0, 00:21:29.523 "reconnect_delay_sec": 0, 00:21:29.523 "fast_io_fail_timeout_sec": 0, 00:21:29.523 "psk": "key0", 00:21:29.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.523 "hdgst": false, 00:21:29.523 "ddgst": false, 00:21:29.523 "multipath": "multipath" 00:21:29.523 } 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "method": "bdev_nvme_set_hotplug", 00:21:29.523 "params": { 00:21:29.523 "period_us": 100000, 00:21:29.523 "enable": false 00:21:29.523 } 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "method": "bdev_enable_histogram", 00:21:29.523 "params": { 00:21:29.523 "name": "nvme0n1", 00:21:29.523 "enable": true 00:21:29.523 } 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "method": "bdev_wait_for_examine" 00:21:29.523 } 00:21:29.523 ] 00:21:29.523 }, 00:21:29.523 { 00:21:29.523 "subsystem": "nbd", 00:21:29.523 "config": [] 00:21:29.523 } 00:21:29.523 ] 00:21:29.523 }' 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1307605 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1307605 ']' 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1307605 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1307605 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1307605' 00:21:29.523 killing process with pid 1307605 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1307605 00:21:29.523 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.523 00:21:29.523 Latency(us) 00:21:29.523 [2024-11-20T15:16:05.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.523 [2024-11-20T15:16:05.459Z] =================================================================================================================== 00:21:29.523 [2024-11-20T15:16:05.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.523 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1307605 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1307488 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1307488 
']' 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1307488 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1307488 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1307488' 00:21:29.784 killing process with pid 1307488 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1307488 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1307488 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.784 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:29.784 "subsystems": [ 00:21:29.784 { 00:21:29.784 "subsystem": "keyring", 00:21:29.784 "config": [ 00:21:29.784 { 00:21:29.784 "method": "keyring_file_add_key", 00:21:29.784 "params": { 00:21:29.784 "name": "key0", 00:21:29.784 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:29.784 } 00:21:29.784 } 00:21:29.784 ] 00:21:29.784 }, 00:21:29.784 { 00:21:29.784 "subsystem": "iobuf", 00:21:29.784 "config": [ 00:21:29.784 { 00:21:29.784 "method": "iobuf_set_options", 00:21:29.784 "params": { 00:21:29.784 "small_pool_count": 8192, 00:21:29.784 "large_pool_count": 1024, 00:21:29.784 "small_bufsize": 8192, 00:21:29.784 "large_bufsize": 135168, 00:21:29.784 "enable_numa": false 00:21:29.784 } 00:21:29.784 } 00:21:29.784 ] 00:21:29.784 }, 00:21:29.784 { 00:21:29.784 "subsystem": "sock", 00:21:29.784 "config": [ 00:21:29.784 { 00:21:29.784 "method": "sock_set_default_impl", 00:21:29.784 "params": { 00:21:29.784 "impl_name": "posix" 00:21:29.784 } 00:21:29.784 }, 00:21:29.784 { 00:21:29.784 "method": "sock_impl_set_options", 00:21:29.784 "params": { 00:21:29.784 "impl_name": "ssl", 00:21:29.784 "recv_buf_size": 4096, 00:21:29.784 "send_buf_size": 4096, 00:21:29.784 "enable_recv_pipe": true, 00:21:29.784 "enable_quickack": false, 00:21:29.784 "enable_placement_id": 0, 00:21:29.784 "enable_zerocopy_send_server": true, 00:21:29.784 "enable_zerocopy_send_client": false, 00:21:29.784 "zerocopy_threshold": 0, 00:21:29.784 "tls_version": 0, 00:21:29.784 "enable_ktls": false 00:21:29.784 } 00:21:29.784 }, 00:21:29.784 { 00:21:29.784 "method": "sock_impl_set_options", 00:21:29.784 "params": { 00:21:29.784 "impl_name": "posix", 00:21:29.784 "recv_buf_size": 2097152, 00:21:29.784 "send_buf_size": 2097152, 00:21:29.784 "enable_recv_pipe": true, 00:21:29.784 "enable_quickack": false, 00:21:29.784 "enable_placement_id": 0, 00:21:29.784 "enable_zerocopy_send_server": true, 00:21:29.784 "enable_zerocopy_send_client": false, 00:21:29.784 "zerocopy_threshold": 0, 00:21:29.784 "tls_version": 0, 00:21:29.784 "enable_ktls": 
false 00:21:29.784 } 00:21:29.784 } 00:21:29.784 ] 00:21:29.784 }, 00:21:29.784 { 00:21:29.784 "subsystem": "vmd", 00:21:29.784 "config": [] 00:21:29.784 }, 00:21:29.784 { 00:21:29.784 "subsystem": "accel", 00:21:29.784 "config": [ 00:21:29.784 { 00:21:29.784 "method": "accel_set_options", 00:21:29.784 "params": { 00:21:29.784 "small_cache_size": 128, 00:21:29.784 "large_cache_size": 16, 00:21:29.784 "task_count": 2048, 00:21:29.784 "sequence_count": 2048, 00:21:29.784 "buf_count": 2048 00:21:29.785 } 00:21:29.785 } 00:21:29.785 ] 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "subsystem": "bdev", 00:21:29.785 "config": [ 00:21:29.785 { 00:21:29.785 "method": "bdev_set_options", 00:21:29.785 "params": { 00:21:29.785 "bdev_io_pool_size": 65535, 00:21:29.785 "bdev_io_cache_size": 256, 00:21:29.785 "bdev_auto_examine": true, 00:21:29.785 "iobuf_small_cache_size": 128, 00:21:29.785 "iobuf_large_cache_size": 16 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "bdev_raid_set_options", 00:21:29.785 "params": { 00:21:29.785 "process_window_size_kb": 1024, 00:21:29.785 "process_max_bandwidth_mb_sec": 0 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "bdev_iscsi_set_options", 00:21:29.785 "params": { 00:21:29.785 "timeout_sec": 30 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "bdev_nvme_set_options", 00:21:29.785 "params": { 00:21:29.785 "action_on_timeout": "none", 00:21:29.785 "timeout_us": 0, 00:21:29.785 "timeout_admin_us": 0, 00:21:29.785 "keep_alive_timeout_ms": 10000, 00:21:29.785 "arbitration_burst": 0, 00:21:29.785 "low_priority_weight": 0, 00:21:29.785 "medium_priority_weight": 0, 00:21:29.785 "high_priority_weight": 0, 00:21:29.785 "nvme_adminq_poll_period_us": 10000, 00:21:29.785 "nvme_ioq_poll_period_us": 0, 00:21:29.785 "io_queue_requests": 0, 00:21:29.785 "delay_cmd_submit": true, 00:21:29.785 "transport_retry_count": 4, 00:21:29.785 "bdev_retry_count": 3, 00:21:29.785 "transport_ack_timeout": 0, 00:21:29.785 "ctrlr_loss_timeout_sec": 0, 00:21:29.785 "reconnect_delay_sec": 0, 00:21:29.785 "fast_io_fail_timeout_sec": 0, 00:21:29.785 "disable_auto_failback": false, 00:21:29.785 "generate_uuids": false, 00:21:29.785 "transport_tos": 0, 00:21:29.785 "nvme_error_stat": false, 00:21:29.785 "rdma_srq_size": 0, 00:21:29.785 "io_path_stat": false, 00:21:29.785 "allow_accel_sequence": false, 00:21:29.785 "rdma_max_cq_size": 0, 00:21:29.785 "rdma_cm_event_timeout_ms": 0, 00:21:29.785 "dhchap_digests": [ 00:21:29.785 "sha256", 00:21:29.785 "sha384", 00:21:29.785 "sha512" 00:21:29.785 ], 00:21:29.785 "dhchap_dhgroups": [ 00:21:29.785 "null", 00:21:29.785 "ffdhe2048", 00:21:29.785 "ffdhe3072", 00:21:29.785 "ffdhe4096", 00:21:29.785 "ffdhe6144", 00:21:29.785 "ffdhe8192" 00:21:29.785 ] 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "bdev_nvme_set_hotplug", 00:21:29.785 "params": { 00:21:29.785 "period_us": 100000, 00:21:29.785 "enable": false 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "bdev_malloc_create", 00:21:29.785 "params": { 00:21:29.785 "name": "malloc0", 00:21:29.785 "num_blocks": 8192, 00:21:29.785 "block_size": 4096, 00:21:29.785 "physical_block_size": 4096, 00:21:29.785 "uuid": "cf4d3106-74ac-4756-8759-adea20c4cb83", 00:21:29.785 "optimal_io_boundary": 0, 00:21:29.785 "md_size": 0, 00:21:29.785 "dif_type": 0, 00:21:29.785 "dif_is_head_of_md": false, 00:21:29.785 "dif_pi_format": 0 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "bdev_wait_for_examine" 
00:21:29.785 } 00:21:29.785 ] 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "subsystem": "nbd", 00:21:29.785 "config": [] 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "subsystem": "scheduler", 00:21:29.785 "config": [ 00:21:29.785 { 00:21:29.785 "method": "framework_set_scheduler", 00:21:29.785 "params": { 00:21:29.785 "name": "static" 00:21:29.785 } 00:21:29.785 } 00:21:29.785 ] 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "subsystem": "nvmf", 00:21:29.785 "config": [ 00:21:29.785 { 00:21:29.785 "method": "nvmf_set_config", 00:21:29.785 "params": { 00:21:29.785 "discovery_filter": "match_any", 00:21:29.785 "admin_cmd_passthru": { 00:21:29.785 "identify_ctrlr": false 00:21:29.785 }, 00:21:29.785 "dhchap_digests": [ 00:21:29.785 "sha256", 00:21:29.785 "sha384", 00:21:29.785 "sha512" 00:21:29.785 ], 00:21:29.785 "dhchap_dhgroups": [ 00:21:29.785 "null", 00:21:29.785 "ffdhe2048", 00:21:29.785 "ffdhe3072", 00:21:29.785 "ffdhe4096", 00:21:29.785 "ffdhe6144", 00:21:29.785 "ffdhe8192" 00:21:29.785 ] 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "nvmf_set_max_subsystems", 00:21:29.785 "params": { 00:21:29.785 "max_subsystems": 1024 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "nvmf_set_crdt", 00:21:29.785 "params": { 00:21:29.785 "crdt1": 0, 00:21:29.785 "crdt2": 0, 00:21:29.785 "crdt3": 0 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "nvmf_create_transport", 00:21:29.785 "params": { 00:21:29.785 "trtype": "TCP", 00:21:29.785 "max_queue_depth": 128, 00:21:29.785 "max_io_qpairs_per_ctrlr": 127, 00:21:29.785 "in_capsule_data_size": 4096, 00:21:29.785 "max_io_size": 131072, 00:21:29.785 "io_unit_size": 131072, 00:21:29.785 "max_aq_depth": 128, 00:21:29.785 "num_shared_buffers": 511, 00:21:29.785 "buf_cache_size": 4294967295, 00:21:29.785 "dif_insert_or_strip": false, 00:21:29.785 "zcopy": false, 00:21:29.785 "c2h_success": false, 00:21:29.785 "sock_priority": 0, 00:21:29.785 "abort_timeout_sec": 1, 00:21:29.785 "ack_timeout": 0, 00:21:29.785 "data_wr_pool_size": 0 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "nvmf_create_subsystem", 00:21:29.785 "params": { 00:21:29.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.785 "allow_any_host": false, 00:21:29.785 "serial_number": "00000000000000000000", 00:21:29.785 "model_number": "SPDK bdev Controller", 00:21:29.785 "max_namespaces": 32, 00:21:29.785 "min_cntlid": 1, 00:21:29.785 "max_cntlid": 65519, 00:21:29.785 "ana_reporting": false 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "nvmf_subsystem_add_host", 00:21:29.785 "params": { 00:21:29.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.785 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.785 "psk": "key0" 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "nvmf_subsystem_add_ns", 00:21:29.785 "params": { 00:21:29.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.785 "namespace": { 00:21:29.785 "nsid": 1, 00:21:29.785 "bdev_name": "malloc0", 00:21:29.785 "nguid": "CF4D310674AC47568759ADEA20C4CB83", 00:21:29.785 "uuid": "cf4d3106-74ac-4756-8759-adea20c4cb83", 00:21:29.785 "no_auto_visible": false 00:21:29.785 } 00:21:29.785 } 00:21:29.785 }, 00:21:29.785 { 00:21:29.785 "method": "nvmf_subsystem_add_listener", 00:21:29.785 "params": { 00:21:29.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.785 "listen_address": { 00:21:29.785 "trtype": "TCP", 00:21:29.785 "adrfam": "IPv4", 00:21:29.785 "traddr": "10.0.0.2", 00:21:29.785 "trsvcid": "4420" 00:21:29.785 }, 00:21:29.785 
"secure_channel": false, 00:21:29.785 "sock_impl": "ssl" 00:21:29.785 } 00:21:29.785 } 00:21:29.785 ] 00:21:29.785 } 00:21:29.785 ] 00:21:29.785 }' 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1308206 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1308206 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1308206 ']' 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.785 16:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.045 [2024-11-20 16:16:05.726763] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:30.045 [2024-11-20 16:16:05.726819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.045 [2024-11-20 16:16:05.818200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.045 [2024-11-20 16:16:05.847353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.045 [2024-11-20 16:16:05.847383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.046 [2024-11-20 16:16:05.847388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.046 [2024-11-20 16:16:05.847393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.046 [2024-11-20 16:16:05.847397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:30.046 [2024-11-20 16:16:05.847846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.305 [2024-11-20 16:16:06.040895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.305 [2024-11-20 16:16:06.072924] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.305 [2024-11-20 16:16:06.073120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1308556 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1308556 /var/tmp/bdevperf.sock 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1308556 ']' 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
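Note: the launch that follows performs the same replay on the initiator side. The bperfcfg JSON saved from the first bdevperf (including bdev_nvme_attach_controller with psk key0 and bdev_enable_histogram) is fed in through -c /dev/fd/63, so the TLS-attached nvme0 bdev exists as soon as the app is up. A sketch, with the capture assumed to be in $bperfcfg:

    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # I/O is then driven over the RPC socket, exactly as before:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests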
00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.876 16:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:30.876 "subsystems": [ 00:21:30.876 { 00:21:30.876 "subsystem": "keyring", 00:21:30.876 "config": [ 00:21:30.876 { 00:21:30.876 "method": "keyring_file_add_key", 00:21:30.876 "params": { 00:21:30.876 "name": "key0", 00:21:30.876 "path": "/tmp/tmp.lD8nt2WzYN" 00:21:30.876 } 00:21:30.876 } 00:21:30.876 ] 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "subsystem": "iobuf", 00:21:30.876 "config": [ 00:21:30.876 { 00:21:30.876 "method": "iobuf_set_options", 00:21:30.876 "params": { 00:21:30.876 "small_pool_count": 8192, 00:21:30.876 "large_pool_count": 1024, 00:21:30.876 "small_bufsize": 8192, 00:21:30.876 "large_bufsize": 135168, 00:21:30.876 "enable_numa": false 00:21:30.876 } 00:21:30.876 } 00:21:30.876 ] 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "subsystem": "sock", 00:21:30.876 "config": [ 00:21:30.876 { 00:21:30.876 "method": "sock_set_default_impl", 00:21:30.876 "params": { 00:21:30.876 "impl_name": "posix" 00:21:30.876 } 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "method": "sock_impl_set_options", 00:21:30.876 "params": { 00:21:30.876 "impl_name": "ssl", 00:21:30.876 "recv_buf_size": 4096, 00:21:30.876 "send_buf_size": 4096, 00:21:30.876 "enable_recv_pipe": true, 00:21:30.876 "enable_quickack": false, 00:21:30.876 "enable_placement_id": 0, 00:21:30.876 "enable_zerocopy_send_server": true, 00:21:30.876 "enable_zerocopy_send_client": false, 00:21:30.876 "zerocopy_threshold": 0, 00:21:30.876 "tls_version": 0, 00:21:30.876 "enable_ktls": false 00:21:30.876 } 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "method": "sock_impl_set_options", 00:21:30.876 "params": { 00:21:30.876 "impl_name": "posix", 00:21:30.876 "recv_buf_size": 2097152, 00:21:30.876 "send_buf_size": 2097152, 00:21:30.876 "enable_recv_pipe": true, 00:21:30.876 "enable_quickack": false, 00:21:30.876 "enable_placement_id": 0, 00:21:30.876 "enable_zerocopy_send_server": true, 00:21:30.876 "enable_zerocopy_send_client": false, 00:21:30.876 "zerocopy_threshold": 0, 00:21:30.876 "tls_version": 0, 00:21:30.876 "enable_ktls": false 00:21:30.876 } 00:21:30.876 } 00:21:30.876 ] 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "subsystem": "vmd", 00:21:30.876 "config": [] 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "subsystem": "accel", 00:21:30.876 "config": [ 00:21:30.876 { 00:21:30.876 "method": "accel_set_options", 00:21:30.876 "params": { 00:21:30.876 "small_cache_size": 128, 00:21:30.876 "large_cache_size": 16, 00:21:30.876 "task_count": 2048, 00:21:30.876 "sequence_count": 2048, 00:21:30.876 "buf_count": 2048 00:21:30.876 } 00:21:30.876 } 00:21:30.876 ] 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "subsystem": "bdev", 00:21:30.876 "config": [ 00:21:30.876 { 00:21:30.876 "method": "bdev_set_options", 00:21:30.876 "params": { 00:21:30.876 "bdev_io_pool_size": 65535, 00:21:30.876 "bdev_io_cache_size": 256, 00:21:30.876 "bdev_auto_examine": true, 00:21:30.876 "iobuf_small_cache_size": 128, 00:21:30.876 "iobuf_large_cache_size": 16 00:21:30.876 } 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "method": 
"bdev_raid_set_options", 00:21:30.876 "params": { 00:21:30.876 "process_window_size_kb": 1024, 00:21:30.876 "process_max_bandwidth_mb_sec": 0 00:21:30.876 } 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "method": "bdev_iscsi_set_options", 00:21:30.876 "params": { 00:21:30.876 "timeout_sec": 30 00:21:30.876 } 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "method": "bdev_nvme_set_options", 00:21:30.876 "params": { 00:21:30.876 "action_on_timeout": "none", 00:21:30.876 "timeout_us": 0, 00:21:30.876 "timeout_admin_us": 0, 00:21:30.876 "keep_alive_timeout_ms": 10000, 00:21:30.876 "arbitration_burst": 0, 00:21:30.876 "low_priority_weight": 0, 00:21:30.876 "medium_priority_weight": 0, 00:21:30.876 "high_priority_weight": 0, 00:21:30.876 "nvme_adminq_poll_period_us": 10000, 00:21:30.876 "nvme_ioq_poll_period_us": 0, 00:21:30.876 "io_queue_requests": 512, 00:21:30.876 "delay_cmd_submit": true, 00:21:30.876 "transport_retry_count": 4, 00:21:30.876 "bdev_retry_count": 3, 00:21:30.876 "transport_ack_timeout": 0, 00:21:30.876 "ctrlr_loss_timeout_sec": 0, 00:21:30.876 "reconnect_delay_sec": 0, 00:21:30.876 "fast_io_fail_timeout_sec": 0, 00:21:30.876 "disable_auto_failback": false, 00:21:30.877 "generate_uuids": false, 00:21:30.877 "transport_tos": 0, 00:21:30.877 "nvme_error_stat": false, 00:21:30.877 "rdma_srq_size": 0, 00:21:30.877 "io_path_stat": false, 00:21:30.877 "allow_accel_sequence": false, 00:21:30.877 "rdma_max_cq_size": 0, 00:21:30.877 "rdma_cm_event_timeout_ms": 0, 00:21:30.877 "dhchap_digests": [ 00:21:30.877 "sha256", 00:21:30.877 "sha384", 00:21:30.877 "sha512" 00:21:30.877 ], 00:21:30.877 "dhchap_dhgroups": [ 00:21:30.877 "null", 00:21:30.877 "ffdhe2048", 00:21:30.877 "ffdhe3072", 00:21:30.877 "ffdhe4096", 00:21:30.877 "ffdhe6144", 00:21:30.877 "ffdhe8192" 00:21:30.877 ] 00:21:30.877 } 00:21:30.877 }, 00:21:30.877 { 00:21:30.877 "method": "bdev_nvme_attach_controller", 00:21:30.877 "params": { 00:21:30.877 "name": "nvme0", 00:21:30.877 "trtype": "TCP", 00:21:30.877 "adrfam": "IPv4", 00:21:30.877 "traddr": "10.0.0.2", 00:21:30.877 "trsvcid": "4420", 00:21:30.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.877 "prchk_reftag": false, 00:21:30.877 "prchk_guard": false, 00:21:30.877 "ctrlr_loss_timeout_sec": 0, 00:21:30.877 "reconnect_delay_sec": 0, 00:21:30.877 "fast_io_fail_timeout_sec": 0, 00:21:30.877 "psk": "key0", 00:21:30.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.877 "hdgst": false, 00:21:30.877 "ddgst": false, 00:21:30.877 "multipath": "multipath" 00:21:30.877 } 00:21:30.877 }, 00:21:30.877 { 00:21:30.877 "method": "bdev_nvme_set_hotplug", 00:21:30.877 "params": { 00:21:30.877 "period_us": 100000, 00:21:30.877 "enable": false 00:21:30.877 } 00:21:30.877 }, 00:21:30.877 { 00:21:30.877 "method": "bdev_enable_histogram", 00:21:30.877 "params": { 00:21:30.877 "name": "nvme0n1", 00:21:30.877 "enable": true 00:21:30.877 } 00:21:30.877 }, 00:21:30.877 { 00:21:30.877 "method": "bdev_wait_for_examine" 00:21:30.877 } 00:21:30.877 ] 00:21:30.877 }, 00:21:30.877 { 00:21:30.877 "subsystem": "nbd", 00:21:30.877 "config": [] 00:21:30.877 } 00:21:30.877 ] 00:21:30.877 }' 00:21:30.877 [2024-11-20 16:16:06.619793] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:21:30.877 [2024-11-20 16:16:06.619867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308556 ] 00:21:30.877 [2024-11-20 16:16:06.703892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.877 [2024-11-20 16:16:06.733734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.137 [2024-11-20 16:16:06.868618] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.708 16:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.708 16:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.708 16:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.708 16:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:31.708 16:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.708 16:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:31.969 Running I/O for 1 seconds... 00:21:32.911 6009.00 IOPS, 23.47 MiB/s 00:21:32.911 Latency(us) 00:21:32.911 [2024-11-20T15:16:08.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.911 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:32.911 Verification LBA range: start 0x0 length 0x2000 00:21:32.911 nvme0n1 : 1.02 6002.58 23.45 0.00 0.00 21126.63 4587.52 29272.75 00:21:32.911 [2024-11-20T15:16:08.847Z] =================================================================================================================== 00:21:32.911 [2024-11-20T15:16:08.847Z] Total : 6002.58 23.45 0.00 0.00 21126.63 4587.52 29272.75 00:21:32.911 { 00:21:32.911 "results": [ 00:21:32.911 { 00:21:32.911 "job": "nvme0n1", 00:21:32.911 "core_mask": "0x2", 00:21:32.911 "workload": "verify", 00:21:32.911 "status": "finished", 00:21:32.911 "verify_range": { 00:21:32.911 "start": 0, 00:21:32.911 "length": 8192 00:21:32.911 }, 00:21:32.911 "queue_depth": 128, 00:21:32.911 "io_size": 4096, 00:21:32.911 "runtime": 1.022393, 00:21:32.911 "iops": 6002.584133498566, 00:21:32.911 "mibps": 23.447594271478774, 00:21:32.911 "io_failed": 0, 00:21:32.911 "io_timeout": 0, 00:21:32.911 "avg_latency_us": 21126.625604258323, 00:21:32.911 "min_latency_us": 4587.52, 00:21:32.911 "max_latency_us": 29272.746666666666 00:21:32.911 } 00:21:32.911 ], 00:21:32.911 "core_count": 1 00:21:32.911 } 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:32.911 nvmf_trace.0 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1308556 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1308556 ']' 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1308556 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.911 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1308556 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1308556' 00:21:33.172 killing process with pid 1308556 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1308556 00:21:33.172 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.172 00:21:33.172 Latency(us) 00:21:33.172 [2024-11-20T15:16:09.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.172 [2024-11-20T15:16:09.108Z] =================================================================================================================== 00:21:33.172 [2024-11-20T15:16:09.108Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1308556 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:33.172 16:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:33.172 rmmod nvme_tcp 00:21:33.172 rmmod nvme_fabrics 00:21:33.172 rmmod nvme_keyring 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:33.172 16:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1308206 ']' 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1308206 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1308206 ']' 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1308206 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.172 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1308206 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1308206' 00:21:33.432 killing process with pid 1308206 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1308206 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1308206 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.432 16:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.cfqPetSoKL /tmp/tmp.OLmEZQjq8m /tmp/tmp.lD8nt2WzYN 00:21:35.978 00:21:35.978 real 1m28.105s 00:21:35.978 user 2m20.967s 00:21:35.978 sys 0m26.347s 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.978 ************************************ 00:21:35.978 END TEST nvmf_tls 
00:21:35.978 ************************************ 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:35.978 ************************************ 00:21:35.978 START TEST nvmf_fips 00:21:35.978 ************************************ 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.978 * Looking for test storage... 00:21:35.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.978 --rc genhtml_branch_coverage=1 00:21:35.978 --rc genhtml_function_coverage=1 00:21:35.978 --rc genhtml_legend=1 00:21:35.978 --rc geninfo_all_blocks=1 00:21:35.978 --rc geninfo_unexecuted_blocks=1 00:21:35.978 00:21:35.978 ' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.978 --rc genhtml_branch_coverage=1 00:21:35.978 --rc genhtml_function_coverage=1 00:21:35.978 --rc genhtml_legend=1 00:21:35.978 --rc geninfo_all_blocks=1 00:21:35.978 --rc geninfo_unexecuted_blocks=1 00:21:35.978 00:21:35.978 ' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.978 --rc genhtml_branch_coverage=1 00:21:35.978 --rc genhtml_function_coverage=1 00:21:35.978 --rc genhtml_legend=1 00:21:35.978 --rc geninfo_all_blocks=1 00:21:35.978 --rc geninfo_unexecuted_blocks=1 00:21:35.978 00:21:35.978 ' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.978 --rc genhtml_branch_coverage=1 00:21:35.978 --rc genhtml_function_coverage=1 00:21:35.978 --rc genhtml_legend=1 00:21:35.978 --rc geninfo_all_blocks=1 00:21:35.978 --rc geninfo_unexecuted_blocks=1 00:21:35.978 00:21:35.978 ' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.978 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:35.979 16:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:35.979 Error setting digest 00:21:35.979 40C28FD6D07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:35.979 40C28FD6D07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.979 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.980 
16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.980 16:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.122 16:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:44.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:44.122 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.122 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.123 16:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:44.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:44.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:44.123 16:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.123 16:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:44.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:21:44.123 00:21:44.123 --- 10.0.0.2 ping statistics --- 00:21:44.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.123 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:44.123 00:21:44.123 --- 10.0.0.1 ping statistics --- 00:21:44.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.123 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1313264 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1313264 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1313264 ']' 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.123 16:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.123 [2024-11-20 16:16:19.346745] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
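At this point the harness has just launched the nvmf target inside the test network namespace and is waiting on its RPC socket. Condensed to its essentials, the launch logged here amounts to the following sketch (paths, flags and core mask exactly as in the trace; the waitforlisten polling helper from autotest_common.sh is shown in simplified form):

    # Start nvmf_tgt in the namespace created earlier: shm id 0 (-i 0),
    # all tracepoint groups enabled (-e 0xFFFF), pinned to core 1 via mask 0x2 (-m 0x2)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Block until the target answers on /var/tmp/spdk.sock before issuing rpc.py calls
    waitforlisten "$nvmfpid"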
00:21:44.123 [2024-11-20 16:16:19.346818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.123 [2024-11-20 16:16:19.447726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.123 [2024-11-20 16:16:19.497344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.123 [2024-11-20 16:16:19.497395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.123 [2024-11-20 16:16:19.497404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.123 [2024-11-20 16:16:19.497412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.123 [2024-11-20 16:16:19.497418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.123 [2024-11-20 16:16:19.498173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.gHD 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.gHD 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.gHD 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.gHD 00:21:44.385 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:44.647 [2024-11-20 16:16:20.367962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.647 [2024-11-20 16:16:20.383953] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.647 [2024-11-20 16:16:20.384282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.647 malloc0 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.647 16:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1313430 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1313430 /var/tmp/bdevperf.sock 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1313430 ']' 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.647 16:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.647 [2024-11-20 16:16:20.529209] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:21:44.647 [2024-11-20 16:16:20.529291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313430 ] 00:21:44.909 [2024-11-20 16:16:20.625126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.909 [2024-11-20 16:16:20.676026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.481 16:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.481 16:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:45.481 16:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.gHD 00:21:45.742 16:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.003 [2024-11-20 16:16:21.717993] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.003 TLSTESTn1 00:21:46.003 16:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.003 Running I/O for 10 seconds... 
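The 10-second verify run whose samples follow exercises exactly the TLS path configured above. Stripped of xtrace noise, the client-side sequence is three calls against the bdevperf instance; a readable recap, with the socket path, key file, NQNs and target address exactly as logged:

    # Register the 0600-permission PSK file generated earlier under the name key0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.gHD
    # Attach an NVMe/TCP controller over TLS using that PSK
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Release the queued job: 128-deep, 4 KiB, verify workload for 10 seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests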
00:21:48.331 3428.00 IOPS, 13.39 MiB/s [2024-11-20T15:16:25.209Z] 4372.50 IOPS, 17.08 MiB/s [2024-11-20T15:16:26.154Z] 4735.00 IOPS, 18.50 MiB/s [2024-11-20T15:16:27.096Z] 4999.75 IOPS, 19.53 MiB/s [2024-11-20T15:16:28.039Z] 4952.80 IOPS, 19.35 MiB/s [2024-11-20T15:16:28.982Z] 5127.50 IOPS, 20.03 MiB/s [2024-11-20T15:16:30.367Z] 5263.71 IOPS, 20.56 MiB/s [2024-11-20T15:16:30.939Z] 5280.75 IOPS, 20.63 MiB/s [2024-11-20T15:16:32.349Z] 5191.67 IOPS, 20.28 MiB/s [2024-11-20T15:16:32.349Z] 5261.20 IOPS, 20.55 MiB/s 00:21:56.413 Latency(us) 00:21:56.413 [2024-11-20T15:16:32.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.413 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:56.413 Verification LBA range: start 0x0 length 0x2000 00:21:56.413 TLSTESTn1 : 10.02 5265.30 20.57 0.00 0.00 24272.48 6471.68 37137.07 00:21:56.413 [2024-11-20T15:16:32.349Z] =================================================================================================================== 00:21:56.413 [2024-11-20T15:16:32.349Z] Total : 5265.30 20.57 0.00 0.00 24272.48 6471.68 37137.07 00:21:56.413 { 00:21:56.413 "results": [ 00:21:56.413 { 00:21:56.413 "job": "TLSTESTn1", 00:21:56.413 "core_mask": "0x4", 00:21:56.413 "workload": "verify", 00:21:56.413 "status": "finished", 00:21:56.413 "verify_range": { 00:21:56.413 "start": 0, 00:21:56.413 "length": 8192 00:21:56.413 }, 00:21:56.413 "queue_depth": 128, 00:21:56.414 "io_size": 4096, 00:21:56.414 "runtime": 10.016151, 00:21:56.414 "iops": 5265.296020397456, 00:21:56.414 "mibps": 20.567562579677563, 00:21:56.414 "io_failed": 0, 00:21:56.414 "io_timeout": 0, 00:21:56.414 "avg_latency_us": 24272.477012653748, 00:21:56.414 "min_latency_us": 6471.68, 00:21:56.414 "max_latency_us": 37137.066666666666 00:21:56.414 } 00:21:56.414 ], 00:21:56.414 "core_count": 1 00:21:56.414 } 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:56.414 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:56.414 nvmf_trace.0 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1313430 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1313430 ']' 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1313430 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1313430 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1313430' 00:21:56.414 killing process with pid 1313430 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1313430 00:21:56.414 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.414 00:21:56.414 Latency(us) 00:21:56.414 [2024-11-20T15:16:32.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.414 [2024-11-20T15:16:32.350Z] =================================================================================================================== 00:21:56.414 [2024-11-20T15:16:32.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1313430 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:56.414 rmmod nvme_tcp 00:21:56.414 rmmod nvme_fabrics 00:21:56.414 rmmod nvme_keyring 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1313264 ']' 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1313264 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1313264 ']' 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1313264 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.414 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1313264 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.675 16:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1313264' 00:21:56.675 killing process with pid 1313264 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1313264 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1313264 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.675 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.gHD 00:21:59.224 00:21:59.224 real 0m23.193s 00:21:59.224 user 0m24.943s 00:21:59.224 sys 0m9.653s 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:59.224 ************************************ 00:21:59.224 END TEST nvmf_fips 00:21:59.224 ************************************ 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.224 ************************************ 00:21:59.224 START TEST nvmf_control_msg_list 00:21:59.224 ************************************ 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:59.224 * Looking for test storage... 
00:21:59.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.224 --rc genhtml_branch_coverage=1 00:21:59.224 --rc genhtml_function_coverage=1 00:21:59.224 --rc genhtml_legend=1 00:21:59.224 --rc geninfo_all_blocks=1 00:21:59.224 --rc geninfo_unexecuted_blocks=1 00:21:59.224 00:21:59.224 ' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.224 --rc genhtml_branch_coverage=1 00:21:59.224 --rc genhtml_function_coverage=1 00:21:59.224 --rc genhtml_legend=1 00:21:59.224 --rc geninfo_all_blocks=1 00:21:59.224 --rc geninfo_unexecuted_blocks=1 00:21:59.224 00:21:59.224 ' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.224 --rc genhtml_branch_coverage=1 00:21:59.224 --rc genhtml_function_coverage=1 00:21:59.224 --rc genhtml_legend=1 00:21:59.224 --rc geninfo_all_blocks=1 00:21:59.224 --rc geninfo_unexecuted_blocks=1 00:21:59.224 00:21:59.224 ' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.224 --rc genhtml_branch_coverage=1 00:21:59.224 --rc genhtml_function_coverage=1 00:21:59.224 --rc genhtml_legend=1 00:21:59.224 --rc geninfo_all_blocks=1 00:21:59.224 --rc geninfo_unexecuted_blocks=1 00:21:59.224 00:21:59.224 ' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.224 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.225 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:07.371 16:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:07.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.371 16:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:07.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.371 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:07.372 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:07.372 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.372 16:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:22:07.372 00:22:07.372 --- 10.0.0.2 ping statistics --- 00:22:07.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.372 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:22:07.372 00:22:07.372 --- 10.0.0.1 ping statistics --- 00:22:07.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.372 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1319970 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1319970 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1319970 ']' 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.372 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.372 [2024-11-20 16:16:42.442428] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:22:07.372 [2024-11-20 16:16:42.442504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.372 [2024-11-20 16:16:42.526209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.372 [2024-11-20 16:16:42.577720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.372 [2024-11-20 16:16:42.577774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.372 [2024-11-20 16:16:42.577783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.372 [2024-11-20 16:16:42.577791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.372 [2024-11-20 16:16:42.577798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
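
The waitforlisten step traced above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) blocks until the nvmf_tgt that was just launched with ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF answers on its RPC socket, or bails out if the process dies first. A rough bash equivalent, assuming SPDK's stock rpc.py client run from the repository root; the 0.5 s retry interval is an assumption, not the value SPDK uses:

    # Poll until the target answers on its RPC socket (sketch of waitforlisten).
    wait_for_rpc() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=${3:-100}
        local i
        for ((i = 0; i < max_retries; i++)); do
            # Give up early if the target died instead of coming up (kill -0
            # is the same liveness probe the trace uses in killprocess).
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods succeeds once the socket is up and serving RPCs.
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5   # assumed back-off; SPDK's actual interval may differ
        done
        return 1
    }

In the trace the PID being polled is 1319970, the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace a few records earlier; the UNIX-domain RPC socket is reachable from the host side regardless of the network namespace.
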
00:22:07.372 [2024-11-20 16:16:42.578559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.372 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.372 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:07.372 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.372 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.372 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.372 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.634 [2024-11-20 16:16:43.312546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.634 Malloc0 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.634 16:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.634 [2024-11-20 16:16:43.367006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1320015 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1320016 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1320017 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1320015 00:22:07.634 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:07.635 [2024-11-20 16:16:43.477924] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:07.635 [2024-11-20 16:16:43.478151] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:07.635 [2024-11-20 16:16:43.478484] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:09.020 Initializing NVMe Controllers 00:22:09.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:09.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:09.020 Initialization complete. Launching workers. 
00:22:09.020 ========================================================
00:22:09.020 Latency(us)
00:22:09.020 Device Information : IOPS MiB/s Average min max
00:22:09.020 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40918.67 40707.02 41279.77
00:22:09.020 ========================================================
00:22:09.020 Total : 25.00 0.10 40918.67 40707.02 41279.77
00:22:09.020
00:22:09.020 Initializing NVMe Controllers
00:22:09.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:09.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:22:09.020 Initialization complete. Launching workers.
00:22:09.020 ========================================================
00:22:09.020 Latency(us)
00:22:09.020 Device Information : IOPS MiB/s Average min max
00:22:09.020 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2870.00 11.21 348.19 151.28 633.16
00:22:09.020 ========================================================
00:22:09.020 Total : 2870.00 11.21 348.19 151.28 633.16
00:22:09.020
00:22:09.020 Initializing NVMe Controllers
00:22:09.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:09.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:22:09.020 Initialization complete. Launching workers.
00:22:09.020 ========================================================
00:22:09.020 Latency(us)
00:22:09.020 Device Information : IOPS MiB/s Average min max
00:22:09.020 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1453.00 5.68 688.23 253.20 897.69
00:22:09.020 ========================================================
00:22:09.020 Total : 1453.00 5.68 688.23 253.20 897.69
00:22:09.020
00:22:09.020 [2024-11-20 16:16:44.671963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2028ce0 is same with the state(6) to be set
00:22:09.020 [2024-11-20 16:16:44.672011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2028ce0 is same with the state(6) to be set
00:22:09.020 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1320016
00:22:09.020 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1320017
00:22:09.020 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:09.020 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:22:09.020 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:09.020 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:22:09.020 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:09.021 rmmod nvme_tcp
00:22:09.021 rmmod nvme_fabrics
00:22:09.021 rmmod nvme_keyring
00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v
-r nvme-fabrics 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1319970 ']' 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1319970 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1319970 ']' 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1319970 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1319970 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1319970' 00:22:09.021 killing process with pid 1319970 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1319970 00:22:09.021 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1319970 00:22:09.281 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.281 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.281 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.281 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.281 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.195 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.195 00:22:11.195 real 0m12.417s 00:22:11.195 user 0m8.001s 00:22:11.195 sys 0m6.755s 00:22:11.195 16:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.195 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:11.195 ************************************ 00:22:11.195 END TEST nvmf_control_msg_list 00:22:11.195 ************************************ 00:22:11.195 16:16:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:11.456 ************************************ 00:22:11.456 START TEST nvmf_wait_for_buf 00:22:11.456 ************************************ 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:11.456 * Looking for test storage... 00:22:11.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.456 --rc genhtml_branch_coverage=1 00:22:11.456 --rc genhtml_function_coverage=1 00:22:11.456 --rc genhtml_legend=1 00:22:11.456 --rc geninfo_all_blocks=1 00:22:11.456 --rc geninfo_unexecuted_blocks=1 00:22:11.456 00:22:11.456 ' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.456 --rc genhtml_branch_coverage=1 00:22:11.456 --rc genhtml_function_coverage=1 00:22:11.456 --rc genhtml_legend=1 00:22:11.456 --rc geninfo_all_blocks=1 00:22:11.456 --rc geninfo_unexecuted_blocks=1 00:22:11.456 00:22:11.456 ' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.456 --rc genhtml_branch_coverage=1 00:22:11.456 --rc genhtml_function_coverage=1 00:22:11.456 --rc genhtml_legend=1 00:22:11.456 --rc geninfo_all_blocks=1 00:22:11.456 --rc geninfo_unexecuted_blocks=1 00:22:11.456 00:22:11.456 ' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.456 --rc genhtml_branch_coverage=1 00:22:11.456 --rc genhtml_function_coverage=1 00:22:11.456 --rc genhtml_legend=1 00:22:11.456 --rc geninfo_all_blocks=1 00:22:11.456 --rc geninfo_unexecuted_blocks=1 00:22:11.456 00:22:11.456 ' 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.456 16:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.456 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.718 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.864 
16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:19.864 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:19.864 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:19.864 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.864 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:19.865 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.865 16:16:54 
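The gather_supported_nvmf_pci_devs walk traced above reduces to a sysfs glob: match PCI functions by vendor/device ID, then list the net interfaces registered under each function. A minimal standalone sketch of the same idea, assuming only the E810 ID pair seen in this trace (0x8086:0x159b); the real helper also caches the bus scan and covers the x722 and Mellanox IDs:

    # List net devices backed by Intel E810 (0x8086:0x159b) PCI functions.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 ]] || continue
        [[ $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue                  # function has no netdev bound
            echo "Found ${net##*/} under ${pci##*/}"   # e.g. cvl_0_0 under 0000:4b:00.0
        done
    done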
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:19.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:22:19.865 00:22:19.865 --- 10.0.0.2 ping statistics --- 00:22:19.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.865 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:22:19.865 00:22:19.865 --- 10.0.0.1 ping statistics --- 00:22:19.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.865 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1324640 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1324640 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1324640 ']' 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.865 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:19.865 [2024-11-20 16:16:55.013050] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
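nvmfappstart, traced above, launches the target inside the freshly built namespace with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, run from the spdk checkout; the polling loop is illustrative rather than the harness's exact implementation:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # --wait-for-rpc holds off framework init so pool sizes can still be tuned.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done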
00:22:19.865 [2024-11-20 16:16:55.013114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:19.865 [2024-11-20 16:16:55.111676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:19.865 [2024-11-20 16:16:55.161944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:19.865 [2024-11-20 16:16:55.161995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:19.865 [2024-11-20 16:16:55.162004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:19.865 [2024-11-20 16:16:55.162011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:19.865 [2024-11-20 16:16:55.162017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:19.865 [2024-11-20 16:16:55.162774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:22:20.128 16:16:55
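The three rpc_cmd calls just traced are the heart of the wait_for_buf setup: while the framework is still uninitialized, the accel caches are zeroed and the small iobuf pool is shrunk to 154 buffers so the TCP transport can exhaust it, and only then is init released. Over the plain RPC client (rpc_cmd is the harness wrapper around rpc.py) the sequence would look roughly like:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init    # options are locked in once the pools are allocated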
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:20.128 Malloc0 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:20.128 [2024-11-20 16:16:55.979115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.128 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:20.128 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.128 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:20.128 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.129 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:20.129 [2024-11-20 16:16:56.015462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.129 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.129 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:20.390 [2024-11-20 16:16:56.121265] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:21.779 Initializing NVMe Controllers 00:22:21.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:21.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:21.779 Initialization complete. Launching workers. 00:22:21.779 ======================================================== 00:22:21.779 Latency(us) 00:22:21.779 Device Information : IOPS MiB/s Average min max 00:22:21.779 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32263.61 8009.77 63851.83 00:22:21.779 ======================================================== 00:22:21.779 Total : 129.00 16.12 32263.61 8009.77 63851.83 00:22:21.779 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.779 rmmod nvme_tcp 00:22:21.779 rmmod nvme_fabrics 00:22:21.779 rmmod nvme_keyring 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1324640 ']' 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1324640 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1324640 ']' 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1324640 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
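The pass criterion traced above: after the one-second randread run against the deliberately undersized 154-buffer pool, the test reads iobuf_get_stats and requires that the nvmf_TCP module had to retry small-buffer allocation (2038 retries here); zero retries would mean the pool never ran dry and the wait-for-buffer path went unexercised. Condensed, with the jq filter copied from the trace:

    retry_count=$(./scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "small iobuf pool was never exhausted; wait_for_buf untested" >&2
        exit 1
    fi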
common/autotest_common.sh@959 -- # uname 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1324640 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1324640' 00:22:21.779 killing process with pid 1324640 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1324640 00:22:21.779 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1324640 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.041 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.070 00:22:24.070 real 0m12.771s 00:22:24.070 user 0m5.136s 00:22:24.070 sys 0m6.214s 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:24.070 ************************************ 00:22:24.070 END TEST nvmf_wait_for_buf 00:22:24.070 ************************************ 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:24.070 16:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.070 16:16:59 
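The teardown traced above unwinds the setup in reverse: the kernel NVMe modules are removed, the target process is killed, the tagged firewall rule is dropped by filtering its SPDK_NVMF comment out of an iptables-save/iptables-restore round trip, and the namespace wiring is torn down. A condensed sketch, assuming _remove_spdk_ns amounts to deleting the test namespace:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
    ip netns delete cvl_0_0_ns_spdk                        # cvl_0_0 returns to the root ns
    ip -4 addr flush cvl_0_1                               # clear the initiator address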
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:32.215 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:32.215 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:32.215 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.215 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:32.216 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.216 ************************************ 00:22:32.216 START TEST nvmf_perf_adq 00:22:32.216 ************************************ 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:32.216 * Looking for test storage... 00:22:32.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.216 16:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.216 --rc genhtml_branch_coverage=1 00:22:32.216 --rc genhtml_function_coverage=1 00:22:32.216 --rc genhtml_legend=1 00:22:32.216 --rc geninfo_all_blocks=1 00:22:32.216 --rc geninfo_unexecuted_blocks=1 00:22:32.216 00:22:32.216 ' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.216 --rc genhtml_branch_coverage=1 00:22:32.216 --rc genhtml_function_coverage=1 00:22:32.216 --rc genhtml_legend=1 00:22:32.216 --rc geninfo_all_blocks=1 00:22:32.216 --rc geninfo_unexecuted_blocks=1 00:22:32.216 00:22:32.216 ' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.216 --rc genhtml_branch_coverage=1 00:22:32.216 --rc genhtml_function_coverage=1 00:22:32.216 --rc genhtml_legend=1 00:22:32.216 --rc geninfo_all_blocks=1 00:22:32.216 --rc geninfo_unexecuted_blocks=1 00:22:32.216 00:22:32.216 ' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.216 --rc genhtml_branch_coverage=1 00:22:32.216 --rc genhtml_function_coverage=1 00:22:32.216 --rc genhtml_legend=1 00:22:32.216 --rc geninfo_all_blocks=1 00:22:32.216 --rc geninfo_unexecuted_blocks=1 00:22:32.216 00:22:32.216 ' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
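The lt/cmp_versions trace above is a dotted-version comparison used to decide which lcov coverage flags apply: split both strings on '.', '-' and ':', then compare numerically field by field, treating missing fields as zero, so 1.15 sorts below 2. A standalone sketch of the same idea for numeric fields (the function name here is illustrative, not the script's):

    version_lt() {    # returns 0 when $1 < $2
        local -a a b; local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1      # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"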
00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.216 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:32.217 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.217 16:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.804 16:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:38.804 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:38.804 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.804 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:38.804 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:38.804 16:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:38.805 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:38.805 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:40.190 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:42.734 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:48.028 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:48.028 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.028 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.028 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.028 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.028 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:48.029 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:48.029 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:48.029 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:48.029 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.029 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:22:48.029 00:22:48.029 --- 10.0.0.2 ping statistics --- 00:22:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.030 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
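nvmf_tcp_init above gives the target port its own network namespace so one host can exercise a real NVMe/TCP path: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, the firewall is opened for port 4420, and reachability is ping-checked in both directions (the second ping's output continues below). The topology, condensed to its essentials with the names from the trace:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target port into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Let NVMe/TCP (port 4420) in through the host firewall, as the ipts
  # helper does, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1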
00:22:48.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:22:48.030 00:22:48.030 --- 10.0.0.1 ping statistics --- 00:22:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.030 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1334700 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1334700 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1334700 ']' 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.030 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.030 [2024-11-20 16:17:23.566971] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
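With connectivity confirmed, nvmfappstart launches nvmf_tgt inside the target namespace with --wait-for-rpc, so the reactors start but configuration waits until RPCs arrive, and waitforlisten polls until the app answers on /var/tmp/spdk.sock. A rough equivalent of that start-and-wait sequence (paths relative to an SPDK checkout; the retry loop only approximates what waitforlisten does):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
      -m 0xF --wait-for-rpc &
  nvmfpid=$!
  for _ in {1..100}; do                            # ~10 s worth of retries
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      sleep 0.1
  done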
00:22:48.030 [2024-11-20 16:17:23.567037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.030 [2024-11-20 16:17:23.668269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.030 [2024-11-20 16:17:23.723742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.030 [2024-11-20 16:17:23.723796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.030 [2024-11-20 16:17:23.723805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.030 [2024-11-20 16:17:23.723812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.030 [2024-11-20 16:17:23.723818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.030 [2024-11-20 16:17:23.725879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.030 [2024-11-20 16:17:23.726020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.030 [2024-11-20 16:17:23.726200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.030 [2024-11-20 16:17:23.726200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.601 
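adq_configure_nvmf_target 0 then shapes the socket layer before the framework finishes initializing (the reason for --wait-for-rpc): it looks up the default sock implementation, posix here, and explicitly disables placement-id grouping for this baseline run. The same three RPCs issued directly (rpc.py path again relative to an SPDK checkout):

  rpc=./scripts/rpc.py
  impl=$($rpc sock_get_default_impl | jq -r .impl_name)   # 'posix' in this run
  # Baseline: placement-id 0, so incoming qpairs are not grouped by NIC queue.
  $rpc sock_impl_set_options -i "$impl" \
      --enable-placement-id 0 --enable-zerocopy-send-server
  $rpc framework_start_init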
16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.601 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.863 [2024-11-20 16:17:24.579284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.863 Malloc1 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.863 [2024-11-20 16:17:24.654366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1334944 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:48.863 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:50.778 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:50.778 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.778 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.778 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.778 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:50.778 "tick_rate": 2400000000, 00:22:50.778 "poll_groups": [ 00:22:50.778 { 00:22:50.778 "name": "nvmf_tgt_poll_group_000", 00:22:50.778 "admin_qpairs": 1, 00:22:50.778 "io_qpairs": 1, 00:22:50.778 "current_admin_qpairs": 1, 00:22:50.778 "current_io_qpairs": 1, 00:22:50.778 "pending_bdev_io": 0, 00:22:50.778 "completed_nvme_io": 16757, 00:22:50.778 "transports": [ 00:22:50.778 { 00:22:50.778 "trtype": "TCP" 00:22:50.778 } 00:22:50.778 ] 00:22:50.778 }, 00:22:50.778 { 00:22:50.778 "name": "nvmf_tgt_poll_group_001", 00:22:50.778 "admin_qpairs": 0, 00:22:50.778 "io_qpairs": 1, 00:22:50.778 "current_admin_qpairs": 0, 00:22:50.778 "current_io_qpairs": 1, 00:22:50.778 "pending_bdev_io": 0, 00:22:50.778 "completed_nvme_io": 18807, 00:22:50.778 "transports": [ 00:22:50.778 { 00:22:50.778 "trtype": "TCP" 00:22:50.778 } 00:22:50.778 ] 00:22:50.778 }, 00:22:50.778 { 00:22:50.778 "name": "nvmf_tgt_poll_group_002", 00:22:50.778 "admin_qpairs": 0, 00:22:50.778 "io_qpairs": 1, 00:22:50.778 "current_admin_qpairs": 0, 00:22:50.778 "current_io_qpairs": 1, 00:22:50.778 "pending_bdev_io": 0, 00:22:50.778 "completed_nvme_io": 18690, 00:22:50.778 "transports": [ 00:22:50.778 { 00:22:50.778 "trtype": "TCP" 00:22:50.778 } 00:22:50.778 ] 00:22:50.778 }, 00:22:50.778 { 00:22:50.778 "name": "nvmf_tgt_poll_group_003", 00:22:50.778 "admin_qpairs": 0, 00:22:50.778 "io_qpairs": 1, 00:22:50.778 "current_admin_qpairs": 0, 00:22:50.778 "current_io_qpairs": 1, 00:22:50.778 "pending_bdev_io": 0, 00:22:50.778 "completed_nvme_io": 17161, 00:22:50.778 "transports": [ 00:22:50.778 { 00:22:50.778 "trtype": "TCP" 00:22:50.778 } 00:22:50.778 ] 00:22:50.778 } 00:22:50.778 ] 00:22:50.778 }' 00:22:50.778 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:50.778 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:51.039 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:51.039 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:51.039 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1334944 00:22:59.169 Initializing NVMe Controllers 00:22:59.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:59.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:59.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:59.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:22:59.169 Initialization complete. Launching workers. 00:22:59.169 ======================================================== 00:22:59.169 Latency(us) 00:22:59.169 Device Information : IOPS MiB/s Average min max 00:22:59.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13636.60 53.27 4693.45 1338.35 12997.45 00:22:59.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13676.10 53.42 4679.86 1243.09 14521.73 00:22:59.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12774.10 49.90 5009.64 1478.46 12896.38 00:22:59.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12856.30 50.22 4977.30 1175.33 14591.60 00:22:59.169 ======================================================== 00:22:59.169 Total : 52943.10 206.81 4835.16 1175.33 14591.60 00:22:59.169 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.169 rmmod nvme_tcp 00:22:59.169 rmmod nvme_fabrics 00:22:59.169 rmmod nvme_keyring 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1334700 ']' 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1334700 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1334700 ']' 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1334700 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1334700 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1334700' 00:22:59.169 killing process with pid 1334700 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1334700 00:22:59.169 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1334700 00:22:59.169 16:17:35 
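The pass criterion for this baseline run sits in the nvmf_get_stats dump shown before the perf numbers: with spdk_nvme_perf on four cores (-c 0xF0) and placement disabled, each of the four target poll groups should carry exactly one I/O qpair. The harness counts matching groups with jq; the same check stated on its own:

  # Baseline expectation: all 4 poll groups active, one I/O qpair each.
  count=$(./scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
      | wc -l)
  if [[ $count -ne 4 ]]; then
      echo "unexpected distribution: $count of 4 poll groups have 1 qpair" >&2
      exit 1
  fi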
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.169 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.169 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.169 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:59.169 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:59.169 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.169 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.170 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.170 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.170 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.170 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.170 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.716 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.716 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:01.716 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:01.716 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:03.101 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:05.015 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:10.307 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:10.307 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.307 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:10.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:10.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.308 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.308 16:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:23:10.308 00:23:10.308 --- 10.0.0.2 ping statistics --- 00:23:10.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.308 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:23:10.308 00:23:10.308 --- 10.0.0.1 ping statistics --- 00:23:10.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.308 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:10.308 net.core.busy_poll = 1 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:10.308 net.core.busy_read = 1 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:10.308 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1339445 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1339445 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1339445 ']' 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.569 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.831 [2024-11-20 16:17:46.545802] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:23:10.831 [2024-11-20 16:17:46.545869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.831 [2024-11-20 16:17:46.647563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.831 [2024-11-20 16:17:46.701349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
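adq_configure_driver above is the core of the ADQ setup: hardware TC offload goes on for the target port, busy polling is enabled, an mqprio root qdisc splits the port's queues into two traffic classes in channel mode, and a hardware-only flower filter (skip_sw) steers NVMe/TCP traffic for 10.0.0.2:4420 into class 1. Condensed, with the names from the trace (tc and ethtool run in the target namespace because cvl_0_0 lives there now):

  tgt() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # run in the target ns

  tgt ethtool --offload cvl_0_0 hw-tc-offload on
  tgt ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1

  # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ).
  tgt tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
      queues 2@0 2@2 hw 1 mode channel
  tgt tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP flows for the target address into TC1, in hardware only.
  tgt tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked last appears to complete the picture by writing the per-queue xps_rxqs sysfs files, so transmissions reuse the queue a flow arrived on and both directions of a connection stay on one channel.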
00:23:10.831 [2024-11-20 16:17:46.701405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.831 [2024-11-20 16:17:46.701414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.831 [2024-11-20 16:17:46.701421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.831 [2024-11-20 16:17:46.701428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.831 [2024-11-20 16:17:46.703390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.831 [2024-11-20 16:17:46.703550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.831 [2024-11-20 16:17:46.703713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.831 [2024-11-20 16:17:46.703714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.778 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.778 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 
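On the target side the ADQ run differs from the baseline by one knob: adq_configure_nvmf_target 1 passes --enable-placement-id 1, which in SPDK's posix sock module groups incoming connections by the NAPI ID the kernel reports (SO_INCOMING_NAPI_ID), so qpairs arriving on the same hardware queue land on the same poll group:

  # ADQ run: group connections by incoming NAPI ID (i.e. NIC queue) so poll
  # groups line up with the channels carved out by mqprio above.
  ./scripts/rpc.py sock_impl_set_options -i posix \
      --enable-placement-id 1 --enable-zerocopy-send-server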
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 [2024-11-20 16:17:47.563640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 Malloc1 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.779 [2024-11-20 16:17:47.643393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1339758 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:11.779 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.329 16:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:14.329 "tick_rate": 2400000000, 00:23:14.329 "poll_groups": [ 00:23:14.329 { 00:23:14.329 "name": "nvmf_tgt_poll_group_000", 00:23:14.329 "admin_qpairs": 1, 00:23:14.329 "io_qpairs": 2, 00:23:14.329 "current_admin_qpairs": 1, 00:23:14.329 "current_io_qpairs": 2, 00:23:14.329 "pending_bdev_io": 0, 00:23:14.329 "completed_nvme_io": 24766, 00:23:14.329 "transports": [ 00:23:14.329 { 00:23:14.329 "trtype": "TCP" 00:23:14.329 } 00:23:14.329 ] 00:23:14.329 }, 00:23:14.329 { 00:23:14.329 "name": "nvmf_tgt_poll_group_001", 00:23:14.329 "admin_qpairs": 0, 00:23:14.329 "io_qpairs": 2, 00:23:14.329 "current_admin_qpairs": 0, 00:23:14.329 "current_io_qpairs": 2, 00:23:14.329 "pending_bdev_io": 0, 00:23:14.329 "completed_nvme_io": 24992, 00:23:14.329 "transports": [ 00:23:14.329 { 00:23:14.329 "trtype": "TCP" 00:23:14.329 } 00:23:14.329 ] 00:23:14.329 }, 00:23:14.329 { 00:23:14.329 "name": "nvmf_tgt_poll_group_002", 00:23:14.329 "admin_qpairs": 0, 00:23:14.329 "io_qpairs": 0, 00:23:14.329 "current_admin_qpairs": 0, 00:23:14.329 "current_io_qpairs": 0, 00:23:14.329 "pending_bdev_io": 0, 00:23:14.329 "completed_nvme_io": 0, 00:23:14.329 "transports": [ 00:23:14.329 { 00:23:14.329 "trtype": "TCP" 00:23:14.329 } 00:23:14.329 ] 00:23:14.329 }, 00:23:14.329 { 00:23:14.329 "name": "nvmf_tgt_poll_group_003", 00:23:14.329 "admin_qpairs": 0, 00:23:14.329 "io_qpairs": 0, 00:23:14.329 "current_admin_qpairs": 0, 00:23:14.329 "current_io_qpairs": 0, 00:23:14.329 "pending_bdev_io": 0, 00:23:14.329 "completed_nvme_io": 0, 00:23:14.329 "transports": [ 00:23:14.329 { 00:23:14.329 "trtype": "TCP" 00:23:14.329 } 00:23:14.329 ] 00:23:14.329 } 00:23:14.329 ] 00:23:14.329 }' 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:14.329 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1339758 00:23:22.467 Initializing NVMe Controllers 00:23:22.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:22.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:22.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:22.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:22.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:22.467 Initialization complete. Launching workers. 
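The @107-109 sequence above is the pass/fail core of the ADQ run: nvmf_get_stats is captured as JSON and a jq filter counts poll groups that own no I/O queue pairs. With steering working on a four-reactor (-m 0xF) target, all io_qpairs collapse onto the two groups serving traffic class 1, so two groups must remain idle. A sketch of that check, assuming SPDK's scripts/rpc.py client and the stats shape shown above:

# Count poll groups that never received an I/O qpair; fewer than two idle
# groups means connections leaked across queues and ADQ steering failed.
count=$(scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering failed: only $count idle poll groups" >&2
    exit 1
fi

Here the stats report groups 000/001 holding two io_qpairs each and 002/003 holding none, so count=2 and the [[ 2 -lt 2 ]] guard falls through to waiting on the perf process.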
00:23:22.467 ========================================================
00:23:22.467 Latency(us)
00:23:22.467 Device Information                                                       :     IOPS    MiB/s   Average       min       max
00:23:22.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10314.40    40.29   6204.88   1187.65  50451.25
00:23:22.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:  9527.90    37.22   6716.40   1128.92  55494.66
00:23:22.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:  8463.90    33.06   7561.59    992.03  58600.93
00:23:22.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:  8291.60    32.39   7720.16   1038.48  53853.02
00:23:22.467 ========================================================
00:23:22.467 Total                                                                    : 36597.80   142.96   6995.11    992.03  58600.93
00:23:22.467
00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.467 rmmod nvme_tcp 00:23:22.467 rmmod nvme_fabrics 00:23:22.467 rmmod nvme_keyring 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1339445 ']' 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1339445 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1339445 ']' 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1339445 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1339445 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.467 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1339445' killing process with pid 1339445 00:23:22.468 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1339445 00:23:22.468 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1339445 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.468
16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.468 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:25.772 00:23:25.772 real 0m53.965s 00:23:25.772 user 2m49.793s 00:23:25.772 sys 0m11.556s 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:25.772 ************************************ 00:23:25.772 END TEST nvmf_perf_adq 00:23:25.772 ************************************ 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:25.772 ************************************ 00:23:25.772 START TEST nvmf_shutdown 00:23:25.772 ************************************ 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:25.772 * Looking for test storage... 
00:23:25.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.772 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:25.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.773 --rc genhtml_branch_coverage=1 00:23:25.773 --rc genhtml_function_coverage=1 00:23:25.773 --rc genhtml_legend=1 00:23:25.773 --rc geninfo_all_blocks=1 00:23:25.773 --rc geninfo_unexecuted_blocks=1 00:23:25.773 00:23:25.773 ' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:25.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.773 --rc genhtml_branch_coverage=1 00:23:25.773 --rc genhtml_function_coverage=1 00:23:25.773 --rc genhtml_legend=1 00:23:25.773 --rc geninfo_all_blocks=1 00:23:25.773 --rc geninfo_unexecuted_blocks=1 00:23:25.773 00:23:25.773 ' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:25.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.773 --rc genhtml_branch_coverage=1 00:23:25.773 --rc genhtml_function_coverage=1 00:23:25.773 --rc genhtml_legend=1 00:23:25.773 --rc geninfo_all_blocks=1 00:23:25.773 --rc geninfo_unexecuted_blocks=1 00:23:25.773 00:23:25.773 ' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
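The scripts/common.sh trace above is the coverage tooling sizing up the installed lcov: `lt 1.15 2` splits both version strings on ".-:" and compares them field by field (validating each field with decimal) to decide whether the pre-2.0 `--rc lcov_branch_coverage`/`--rc lcov_function_coverage` spellings are required, which is why lcov_rc_opt ends up with the legacy names here. A condensed sketch of that comparison; the real script routes through cmp_versions with an operator argument, folded into a single function below:

# True (exit 0) when dotted version $1 sorts strictly before $2;
# missing trailing fields are treated as 0, as in cmp_versions.
lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"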
00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:25.773 16:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:25.773 ************************************ 00:23:25.773 START TEST nvmf_shutdown_tc1 00:23:25.773 ************************************ 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.773 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.936 16:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.936 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.937 16:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:33.937 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:33.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:33.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:33.937 16:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:33.937 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.937 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:23:33.937 00:23:33.937 --- 10.0.0.2 ping statistics --- 00:23:33.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.937 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:23:33.937 00:23:33.937 --- 10.0.0.1 ping statistics --- 00:23:33.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.937 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.937 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1346400 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1346400 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1346400 ']' 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
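Everything from gather_supported_nvmf_pci_devs down to the two pings above is nvmftestinit assembling the physical topology for this tc: the paired E810 ports are discovered by PCI ID (0x8086:0x159b), cvl_0_0 is moved into a fresh network namespace to play the target at 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits port 4420, and a ping in each direction gates the rest of the test. A condensed sketch of that bring-up, mirroring common.sh@265-291:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1      # start from clean addressing
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                         # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator

Because the target half lives in the namespace, every nvmf_tgt launch in this log is wrapped in `ip netns exec cvl_0_0_ns_spdk`, and remove_spdk_ns flushes the addresses and deletes the namespace again at teardown.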
00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.938 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.938 [2024-11-20 16:18:09.224734] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:23:33.938 [2024-11-20 16:18:09.224803] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.938 [2024-11-20 16:18:09.325216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.938 [2024-11-20 16:18:09.377849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.938 [2024-11-20 16:18:09.377901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.938 [2024-11-20 16:18:09.377909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.938 [2024-11-20 16:18:09.377916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.938 [2024-11-20 16:18:09.377923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.938 [2024-11-20 16:18:09.379941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.938 [2024-11-20 16:18:09.380106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.938 [2024-11-20 16:18:09.380269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:33.938 [2024-11-20 16:18:09.380431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.200 [2024-11-20 16:18:10.102105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:34.200 16:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.200 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.463 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.463 Malloc1 
00:23:34.463 [2024-11-20 16:18:10.238062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.463 Malloc2 00:23:34.463 Malloc3 00:23:34.463 Malloc4 00:23:34.724 Malloc5 00:23:34.724 Malloc6 00:23:34.724 Malloc7 00:23:34.724 Malloc8 00:23:34.724 Malloc9 00:23:34.724 Malloc10 00:23:34.724 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1347066 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1347066 /var/tmp/bdevperf.sock 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1347066 ']' 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
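The bdev_svc process launched below (listening on /var/tmp/bdevperf.sock) takes its bdev configuration on --json /dev/fd/63, i.e. via process substitution from gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10. As the repeated EOF blocks that follow show, the generator appends one bdev_nvme_attach_controller stanza per subsystem number. A trimmed sketch of that loop; the script uses a tab-indented <<-EOF heredoc and an outer wrapper (outside this trace) that joins the stanzas into the final "subsystems" document, so plain <<EOF is used here and the wrapper is elided:

# One attach-controller params block per requested subsystem number.
config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

With TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420 from this run, stanza $i attaches controller Nvme$i to nqn.2016-06.io.spdk:cnode$i; hdgst/ddgst default to false when unset.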
00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.986 { 00:23:34.986 "params": { 00:23:34.986 "name": "Nvme$subsystem", 00:23:34.986 "trtype": "$TEST_TRANSPORT", 00:23:34.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.986 "adrfam": "ipv4", 00:23:34.986 "trsvcid": "$NVMF_PORT", 00:23:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.986 "hdgst": ${hdgst:-false}, 00:23:34.986 "ddgst": ${ddgst:-false} 00:23:34.986 }, 00:23:34.986 "method": "bdev_nvme_attach_controller" 00:23:34.986 } 00:23:34.986 EOF 00:23:34.986 )") 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.986 { 00:23:34.986 "params": { 00:23:34.986 "name": "Nvme$subsystem", 00:23:34.986 "trtype": "$TEST_TRANSPORT", 00:23:34.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.986 "adrfam": "ipv4", 00:23:34.986 "trsvcid": "$NVMF_PORT", 00:23:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.986 "hdgst": ${hdgst:-false}, 00:23:34.986 "ddgst": ${ddgst:-false} 00:23:34.986 }, 00:23:34.986 "method": "bdev_nvme_attach_controller" 00:23:34.986 } 00:23:34.986 EOF 00:23:34.986 )") 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.986 { 00:23:34.986 "params": { 00:23:34.986 "name": "Nvme$subsystem", 00:23:34.986 "trtype": "$TEST_TRANSPORT", 00:23:34.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.986 "adrfam": "ipv4", 00:23:34.986 "trsvcid": "$NVMF_PORT", 00:23:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.986 "hdgst": ${hdgst:-false}, 00:23:34.986 "ddgst": ${ddgst:-false} 00:23:34.986 }, 00:23:34.986 "method": "bdev_nvme_attach_controller" 
00:23:34.986 } 00:23:34.986 EOF 00:23:34.986 )") 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.986 { 00:23:34.986 "params": { 00:23:34.986 "name": "Nvme$subsystem", 00:23:34.986 "trtype": "$TEST_TRANSPORT", 00:23:34.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.986 "adrfam": "ipv4", 00:23:34.986 "trsvcid": "$NVMF_PORT", 00:23:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.986 "hdgst": ${hdgst:-false}, 00:23:34.986 "ddgst": ${ddgst:-false} 00:23:34.986 }, 00:23:34.986 "method": "bdev_nvme_attach_controller" 00:23:34.986 } 00:23:34.986 EOF 00:23:34.986 )") 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.986 { 00:23:34.986 "params": { 00:23:34.986 "name": "Nvme$subsystem", 00:23:34.986 "trtype": "$TEST_TRANSPORT", 00:23:34.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.986 "adrfam": "ipv4", 00:23:34.986 "trsvcid": "$NVMF_PORT", 00:23:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.986 "hdgst": ${hdgst:-false}, 00:23:34.986 "ddgst": ${ddgst:-false} 00:23:34.986 }, 00:23:34.986 "method": "bdev_nvme_attach_controller" 00:23:34.986 } 00:23:34.986 EOF 00:23:34.986 )") 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.986 { 00:23:34.986 "params": { 00:23:34.986 "name": "Nvme$subsystem", 00:23:34.986 "trtype": "$TEST_TRANSPORT", 00:23:34.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.986 "adrfam": "ipv4", 00:23:34.986 "trsvcid": "$NVMF_PORT", 00:23:34.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.986 "hdgst": ${hdgst:-false}, 00:23:34.986 "ddgst": ${ddgst:-false} 00:23:34.986 }, 00:23:34.986 "method": "bdev_nvme_attach_controller" 00:23:34.986 } 00:23:34.986 EOF 00:23:34.986 )") 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.986 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.987 { 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme$subsystem", 00:23:34.987 "trtype": "$TEST_TRANSPORT", 00:23:34.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "$NVMF_PORT", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.987 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.987 "hdgst": ${hdgst:-false}, 00:23:34.987 "ddgst": ${ddgst:-false} 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 } 00:23:34.987 EOF 00:23:34.987 )") 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.987 { 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme$subsystem", 00:23:34.987 "trtype": "$TEST_TRANSPORT", 00:23:34.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "$NVMF_PORT", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.987 "hdgst": ${hdgst:-false}, 00:23:34.987 "ddgst": ${ddgst:-false} 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 } 00:23:34.987 EOF 00:23:34.987 )") 00:23:34.987 [2024-11-20 16:18:10.767153] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:23:34.987 [2024-11-20 16:18:10.767235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.987 { 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme$subsystem", 00:23:34.987 "trtype": "$TEST_TRANSPORT", 00:23:34.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "$NVMF_PORT", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.987 "hdgst": ${hdgst:-false}, 00:23:34.987 "ddgst": ${ddgst:-false} 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 } 00:23:34.987 EOF 00:23:34.987 )") 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.987 { 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme$subsystem", 00:23:34.987 "trtype": "$TEST_TRANSPORT", 00:23:34.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "$NVMF_PORT", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.987 "hdgst": ${hdgst:-false}, 00:23:34.987 "ddgst": ${ddgst:-false} 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 } 00:23:34.987 EOF 00:23:34.987 )") 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# cat 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:34.987 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme1", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme2", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme3", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme4", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme5", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme6", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme7", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme8", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 
"trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme9", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 },{ 00:23:34.987 "params": { 00:23:34.987 "name": "Nvme10", 00:23:34.987 "trtype": "tcp", 00:23:34.987 "traddr": "10.0.0.2", 00:23:34.987 "adrfam": "ipv4", 00:23:34.987 "trsvcid": "4420", 00:23:34.987 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:34.987 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:34.987 "hdgst": false, 00:23:34.987 "ddgst": false 00:23:34.987 }, 00:23:34.987 "method": "bdev_nvme_attach_controller" 00:23:34.987 }' 00:23:34.987 [2024-11-20 16:18:10.861348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.987 [2024-11-20 16:18:10.914724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1347066 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:36.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1347066 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:36.371 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1346400 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 
00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": 
"bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 [2024-11-20 16:18:13.148240] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:23:37.313 [2024-11-20 16:18:13.148294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347556 ] 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.313 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.313 { 00:23:37.313 "params": { 00:23:37.313 "name": "Nvme$subsystem", 00:23:37.313 "trtype": "$TEST_TRANSPORT", 00:23:37.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.313 "adrfam": "ipv4", 00:23:37.313 "trsvcid": "$NVMF_PORT", 00:23:37.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.313 "hdgst": ${hdgst:-false}, 00:23:37.313 "ddgst": ${ddgst:-false} 00:23:37.313 }, 00:23:37.313 "method": "bdev_nvme_attach_controller" 00:23:37.313 } 00:23:37.313 EOF 00:23:37.313 )") 00:23:37.314 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.314 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.314 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.314 { 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme$subsystem", 00:23:37.314 "trtype": "$TEST_TRANSPORT", 00:23:37.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "$NVMF_PORT", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.314 "hdgst": ${hdgst:-false}, 00:23:37.314 "ddgst": ${ddgst:-false} 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 } 00:23:37.314 EOF 00:23:37.314 )") 00:23:37.314 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:37.314 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
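Each loop iteration above appends one attach-controller snippet to the config array; "jq ." then validates the assembled document that printf joins with commas (IFS=,). Only the per-controller params objects appear verbatim in the trace; the enclosing wrapper below is a reconstruction of SPDK's standard JSON-config shape, not copied from the log, written to the same illustrative file name used earlier:

    cat <<'EOF' > /tmp/nvmf_attach.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF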
00:23:37.314 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:37.314 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme1", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme2", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme3", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme4", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme5", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme6", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme7", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme8", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme9", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 },{ 00:23:37.314 "params": { 00:23:37.314 "name": "Nvme10", 00:23:37.314 "trtype": "tcp", 00:23:37.314 "traddr": "10.0.0.2", 00:23:37.314 "adrfam": "ipv4", 00:23:37.314 "trsvcid": "4420", 00:23:37.314 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:37.314 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:37.314 "hdgst": false, 00:23:37.314 "ddgst": false 00:23:37.314 }, 00:23:37.314 "method": "bdev_nvme_attach_controller" 00:23:37.314 }' 00:23:37.314 [2024-11-20 16:18:13.238472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.574 [2024-11-20 16:18:13.274539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.583 Running I/O for 1 seconds... 00:23:39.822 1864.00 IOPS, 116.50 MiB/s 00:23:39.822 Latency(us) 00:23:39.822 [2024-11-20T15:18:15.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.822 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.822 Verification LBA range: start 0x0 length 0x400 00:23:39.822 Nvme1n1 : 1.12 228.52 14.28 0.00 0.00 277033.39 20316.16 249910.61 00:23:39.822 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.822 Verification LBA range: start 0x0 length 0x400 00:23:39.822 Nvme2n1 : 1.12 227.68 14.23 0.00 0.00 273397.76 16493.23 228939.09 00:23:39.822 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.822 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme3n1 : 1.11 235.01 14.69 0.00 0.00 253425.12 19223.89 232434.35 00:23:39.823 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.823 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme4n1 : 1.08 236.41 14.78 0.00 0.00 253397.33 19551.57 246415.36 00:23:39.823 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.823 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme5n1 : 1.18 271.17 16.95 0.00 0.00 217670.14 21189.97 256901.12 00:23:39.823 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.823 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme6n1 : 1.13 229.28 14.33 0.00 0.00 252303.25 2198.19 246415.36 00:23:39.823 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.823 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme7n1 : 1.13 226.84 14.18 0.00 0.00 250284.80 18896.21 269134.51 00:23:39.823 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.823 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme8n1 : 1.19 272.16 17.01 0.00 0.00 205824.38 2075.31 222822.40 00:23:39.823 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.823 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme9n1 : 1.20 266.08 16.63 0.00 0.00 207252.22 10813.44 255153.49 00:23:39.823 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:23:39.823 Verification LBA range: start 0x0 length 0x400 00:23:39.823 Nvme10n1 : 1.20 267.34 16.71 0.00 0.00 202307.24 5570.56 269134.51 00:23:39.823 [2024-11-20T15:18:15.759Z] =================================================================================================================== 00:23:39.823 [2024-11-20T15:18:15.759Z] Total : 2460.50 153.78 0.00 0.00 236478.04 2075.31 269134.51 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:39.823 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.084 rmmod nvme_tcp 00:23:40.084 rmmod nvme_fabrics 00:23:40.084 rmmod nvme_keyring 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1346400 ']' 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1346400 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1346400 ']' 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1346400 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1346400 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1346400' 00:23:40.084 killing process with pid 1346400 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1346400 00:23:40.084 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1346400 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.345 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.890 00:23:42.890 real 0m16.671s 00:23:42.890 user 0m32.734s 00:23:42.890 sys 0m6.975s 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.890 ************************************ 00:23:42.890 END TEST nvmf_shutdown_tc1 00:23:42.890 ************************************ 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:42.890 ************************************ 00:23:42.890 START TEST nvmf_shutdown_tc2 00:23:42.890 ************************************ 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:42.890 16:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:42.890 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:42.890 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:42.890 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:42.891 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.891 16:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:42.891 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:42.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:42.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms
00:23:42.891
00:23:42.891 --- 10.0.0.2 ping statistics ---
00:23:42.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:42.891 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms
00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:42.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:42.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms
00:23:42.891
00:23:42.891 --- 10.0.0.1 ping statistics ---
00:23:42.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:42.891 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms
00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.891 16:18:18
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1348784 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1348784 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1348784 ']' 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.891 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:42.891 [2024-11-20 16:18:18.719818] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:23:42.891 [2024-11-20 16:18:18.719863] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.891 [2024-11-20 16:18:18.779176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.891 [2024-11-20 16:18:18.808798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.891 [2024-11-20 16:18:18.808827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.891 [2024-11-20 16:18:18.808833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.891 [2024-11-20 16:18:18.808838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.891 [2024-11-20 16:18:18.808842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
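The notices above come from nvmf_tgt being started with -e 0xFFFF (all tracepoint groups enabled) under instance id 0. Capturing a snapshot uses exactly the command the app prints; only the copy destination below is our own choice:

    # Live snapshot, command taken verbatim from the app notice above:
    spdk_trace -s nvmf -i 0
    # Or stash the shared-memory trace file for offline decoding later:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0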
00:23:42.891 [2024-11-20 16:18:18.810067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.891 [2024-11-20 16:18:18.813175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.892 [2024-11-20 16:18:18.813306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:42.892 [2024-11-20 16:18:18.813417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.154 [2024-11-20 16:18:18.947751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.154 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:43.154 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:43.154 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.154 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.154 Malloc1 00:23:43.154 [2024-11-20 16:18:19.058037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.154 Malloc2 00:23:43.414 Malloc3 00:23:43.414 Malloc4 00:23:43.414 Malloc5 00:23:43.414 Malloc6 00:23:43.414 Malloc7 00:23:43.414 Malloc8 00:23:43.675 Malloc9 00:23:43.675 Malloc10 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1349041 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1349041 /var/tmp/bdevperf.sock 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1349041 ']' 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.675 16:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.675 { 00:23:43.675 "params": { 00:23:43.675 "name": "Nvme$subsystem", 00:23:43.675 "trtype": "$TEST_TRANSPORT", 00:23:43.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.675 "adrfam": "ipv4", 00:23:43.675 "trsvcid": "$NVMF_PORT", 00:23:43.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.675 "hdgst": ${hdgst:-false}, 00:23:43.675 "ddgst": ${ddgst:-false} 00:23:43.675 }, 00:23:43.675 "method": "bdev_nvme_attach_controller" 00:23:43.675 } 00:23:43.675 EOF 00:23:43.675 )") 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.675 { 00:23:43.675 "params": { 00:23:43.675 "name": "Nvme$subsystem", 00:23:43.675 "trtype": "$TEST_TRANSPORT", 00:23:43.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.675 "adrfam": "ipv4", 00:23:43.675 "trsvcid": "$NVMF_PORT", 00:23:43.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.675 "hdgst": ${hdgst:-false}, 00:23:43.675 "ddgst": ${ddgst:-false} 00:23:43.675 }, 00:23:43.675 "method": "bdev_nvme_attach_controller" 00:23:43.675 } 00:23:43.675 EOF 00:23:43.675 )") 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.675 { 00:23:43.675 "params": { 00:23:43.675 
"name": "Nvme$subsystem", 00:23:43.675 "trtype": "$TEST_TRANSPORT", 00:23:43.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.675 "adrfam": "ipv4", 00:23:43.675 "trsvcid": "$NVMF_PORT", 00:23:43.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.675 "hdgst": ${hdgst:-false}, 00:23:43.675 "ddgst": ${ddgst:-false} 00:23:43.675 }, 00:23:43.675 "method": "bdev_nvme_attach_controller" 00:23:43.675 } 00:23:43.675 EOF 00:23:43.675 )") 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.675 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.675 { 00:23:43.675 "params": { 00:23:43.675 "name": "Nvme$subsystem", 00:23:43.675 "trtype": "$TEST_TRANSPORT", 00:23:43.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.675 "adrfam": "ipv4", 00:23:43.675 "trsvcid": "$NVMF_PORT", 00:23:43.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.676 "hdgst": ${hdgst:-false}, 00:23:43.676 "ddgst": ${ddgst:-false} 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 } 00:23:43.676 EOF 00:23:43.676 )") 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.676 { 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme$subsystem", 00:23:43.676 "trtype": "$TEST_TRANSPORT", 00:23:43.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "$NVMF_PORT", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.676 "hdgst": ${hdgst:-false}, 00:23:43.676 "ddgst": ${ddgst:-false} 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 } 00:23:43.676 EOF 00:23:43.676 )") 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.676 { 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme$subsystem", 00:23:43.676 "trtype": "$TEST_TRANSPORT", 00:23:43.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "$NVMF_PORT", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.676 "hdgst": ${hdgst:-false}, 00:23:43.676 "ddgst": ${ddgst:-false} 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 } 00:23:43.676 EOF 00:23:43.676 )") 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:23:43.676 [2024-11-20 16:18:19.504978] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:23:43.676 [2024-11-20 16:18:19.505034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1349041 ] 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.676 { 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme$subsystem", 00:23:43.676 "trtype": "$TEST_TRANSPORT", 00:23:43.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "$NVMF_PORT", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.676 "hdgst": ${hdgst:-false}, 00:23:43.676 "ddgst": ${ddgst:-false} 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 } 00:23:43.676 EOF 00:23:43.676 )") 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.676 { 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme$subsystem", 00:23:43.676 "trtype": "$TEST_TRANSPORT", 00:23:43.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "$NVMF_PORT", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.676 "hdgst": ${hdgst:-false}, 00:23:43.676 "ddgst": ${ddgst:-false} 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 } 00:23:43.676 EOF 00:23:43.676 )") 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.676 { 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme$subsystem", 00:23:43.676 "trtype": "$TEST_TRANSPORT", 00:23:43.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "$NVMF_PORT", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.676 "hdgst": ${hdgst:-false}, 00:23:43.676 "ddgst": ${ddgst:-false} 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 } 00:23:43.676 EOF 00:23:43.676 )") 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:43.676 { 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme$subsystem", 00:23:43.676 "trtype": "$TEST_TRANSPORT", 00:23:43.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.676 
"adrfam": "ipv4", 00:23:43.676 "trsvcid": "$NVMF_PORT", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.676 "hdgst": ${hdgst:-false}, 00:23:43.676 "ddgst": ${ddgst:-false} 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 } 00:23:43.676 EOF 00:23:43.676 )") 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:43.676 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme1", 00:23:43.676 "trtype": "tcp", 00:23:43.676 "traddr": "10.0.0.2", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "4420", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.676 "hdgst": false, 00:23:43.676 "ddgst": false 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 },{ 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme2", 00:23:43.676 "trtype": "tcp", 00:23:43.676 "traddr": "10.0.0.2", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "4420", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:43.676 "hdgst": false, 00:23:43.676 "ddgst": false 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 },{ 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme3", 00:23:43.676 "trtype": "tcp", 00:23:43.676 "traddr": "10.0.0.2", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "4420", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:43.676 "hdgst": false, 00:23:43.676 "ddgst": false 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 },{ 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme4", 00:23:43.676 "trtype": "tcp", 00:23:43.676 "traddr": "10.0.0.2", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "4420", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:43.676 "hdgst": false, 00:23:43.676 "ddgst": false 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 },{ 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme5", 00:23:43.676 "trtype": "tcp", 00:23:43.676 "traddr": "10.0.0.2", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "4420", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:43.676 "hdgst": false, 00:23:43.676 "ddgst": false 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 },{ 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme6", 00:23:43.676 "trtype": "tcp", 00:23:43.676 "traddr": "10.0.0.2", 00:23:43.676 "adrfam": "ipv4", 00:23:43.676 "trsvcid": "4420", 00:23:43.676 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:43.676 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:43.676 "hdgst": false, 00:23:43.676 "ddgst": false 00:23:43.676 }, 00:23:43.676 "method": "bdev_nvme_attach_controller" 00:23:43.676 },{ 00:23:43.676 "params": { 00:23:43.676 "name": "Nvme7", 00:23:43.676 "trtype": "tcp", 00:23:43.676 "traddr": "10.0.0.2", 
00:23:43.676 "adrfam": "ipv4", 00:23:43.677 "trsvcid": "4420", 00:23:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:43.677 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:43.677 "hdgst": false, 00:23:43.677 "ddgst": false 00:23:43.677 }, 00:23:43.677 "method": "bdev_nvme_attach_controller" 00:23:43.677 },{ 00:23:43.677 "params": { 00:23:43.677 "name": "Nvme8", 00:23:43.677 "trtype": "tcp", 00:23:43.677 "traddr": "10.0.0.2", 00:23:43.677 "adrfam": "ipv4", 00:23:43.677 "trsvcid": "4420", 00:23:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:43.677 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:43.677 "hdgst": false, 00:23:43.677 "ddgst": false 00:23:43.677 }, 00:23:43.677 "method": "bdev_nvme_attach_controller" 00:23:43.677 },{ 00:23:43.677 "params": { 00:23:43.677 "name": "Nvme9", 00:23:43.677 "trtype": "tcp", 00:23:43.677 "traddr": "10.0.0.2", 00:23:43.677 "adrfam": "ipv4", 00:23:43.677 "trsvcid": "4420", 00:23:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:43.677 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:43.677 "hdgst": false, 00:23:43.677 "ddgst": false 00:23:43.677 }, 00:23:43.677 "method": "bdev_nvme_attach_controller" 00:23:43.677 },{ 00:23:43.677 "params": { 00:23:43.677 "name": "Nvme10", 00:23:43.677 "trtype": "tcp", 00:23:43.677 "traddr": "10.0.0.2", 00:23:43.677 "adrfam": "ipv4", 00:23:43.677 "trsvcid": "4420", 00:23:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:43.677 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:43.677 "hdgst": false, 00:23:43.677 "ddgst": false 00:23:43.677 }, 00:23:43.677 "method": "bdev_nvme_attach_controller" 00:23:43.677 }' 00:23:43.677 [2024-11-20 16:18:19.595888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.939 [2024-11-20 16:18:19.632498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.323 Running I/O for 10 seconds... 
00:23:45.323 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.323 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:45.323 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:45.323 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.323 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:45.584 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.846 16:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:45.846 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1349041 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1349041 ']' 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1349041 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.107 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1349041 00:23:46.107 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.107 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.107 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1349041' 00:23:46.107 killing process with pid 1349041 00:23:46.107 16:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1349041 00:23:46.107 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1349041 00:23:46.369 2368.00 IOPS, 148.00 MiB/s [2024-11-20T15:18:22.305Z] Received shutdown signal, test time was about 1.031169 seconds 00:23:46.369 00:23:46.369 Latency(us) 00:23:46.369 [2024-11-20T15:18:22.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.369 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme1n1 : 0.99 258.30 16.14 0.00 0.00 244984.75 27415.89 262144.00 00:23:46.369 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme2n1 : 0.98 262.08 16.38 0.00 0.00 236255.15 18568.53 242920.11 00:23:46.369 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme3n1 : 0.98 260.77 16.30 0.00 0.00 233227.73 19005.44 244667.73 00:23:46.369 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme4n1 : 0.98 263.79 16.49 0.00 0.00 225095.73 4014.08 244667.73 00:23:46.369 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme5n1 : 0.98 260.02 16.25 0.00 0.00 224461.87 21408.43 244667.73 00:23:46.369 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme6n1 : 0.96 200.04 12.50 0.00 0.00 284691.63 24903.68 249910.61 00:23:46.369 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme7n1 : 0.99 259.53 16.22 0.00 0.00 215136.64 16056.32 237677.23 00:23:46.369 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme8n1 : 1.03 248.47 15.53 0.00 0.00 211928.32 15837.87 248162.99 00:23:46.369 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme9n1 : 0.96 199.25 12.45 0.00 0.00 266586.17 14090.24 246415.36 00:23:46.369 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.369 Verification LBA range: start 0x0 length 0x400 00:23:46.369 Nvme10n1 : 0.97 197.97 12.37 0.00 0.00 262971.73 18350.08 267386.88 00:23:46.369 [2024-11-20T15:18:22.305Z] =================================================================================================================== 00:23:46.369 [2024-11-20T15:18:22.305Z] Total : 2410.22 150.64 0.00 0.00 238019.07 4014.08 267386.88 00:23:46.369 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1348784 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f 
./local-job0-0-verify.state 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.755 rmmod nvme_tcp 00:23:47.755 rmmod nvme_fabrics 00:23:47.755 rmmod nvme_keyring 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1348784 ']' 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1348784 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1348784 ']' 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1348784 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1348784 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1348784' 00:23:47.755 killing process with pid 1348784 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1348784 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1348784 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.755 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.305 00:23:50.305 real 0m7.449s 00:23:50.305 user 0m22.232s 00:23:50.305 sys 0m1.247s 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.305 ************************************ 00:23:50.305 END TEST nvmf_shutdown_tc2 00:23:50.305 ************************************ 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:50.305 ************************************ 00:23:50.305 START TEST nvmf_shutdown_tc3 00:23:50.305 ************************************ 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.305 16:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.305 16:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:50.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.305 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:50.306 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:50.306 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:50.306 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.306 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:23:50.306 00:23:50.306 --- 10.0.0.2 ping statistics --- 00:23:50.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.306 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:50.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:23:50.306 00:23:50.306 --- 10.0.0.1 ping statistics --- 00:23:50.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.306 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1350509 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1350509 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:50.306 16:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1350509 ']' 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.306 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:50.568 [2024-11-20 16:18:26.276341] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:23:50.568 [2024-11-20 16:18:26.276407] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.568 [2024-11-20 16:18:26.372145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.568 [2024-11-20 16:18:26.406332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.568 [2024-11-20 16:18:26.406368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.568 [2024-11-20 16:18:26.406374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.568 [2024-11-20 16:18:26.406379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.568 [2024-11-20 16:18:26.406383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
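
For reference: the app_setup_trace notices above mean the target was started with all tracepoint groups enabled (-e 0xFFFF), so its events can be snapshotted while it runs. A minimal sketch of doing that, assuming the spdk_trace binary sits in the same build tree used elsewhere in this log (the destination path is illustrative, not part of the run):

# Snapshot live events from the app named "nvmf", shm instance 0
# (the exact invocation the notice above suggests).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the raw shared-memory trace file for offline analysis.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
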
00:23:50.568 [2024-11-20 16:18:26.407704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.568 [2024-11-20 16:18:26.407857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.568 [2024-11-20 16:18:26.408008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.568 [2024-11-20 16:18:26.408009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.511 [2024-11-20 16:18:27.134422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.511 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.511 Malloc1 00:23:51.512 [2024-11-20 16:18:27.240550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.512 Malloc2 00:23:51.512 Malloc3 00:23:51.512 Malloc4 00:23:51.512 Malloc5 00:23:51.512 Malloc6 00:23:51.774 Malloc7 00:23:51.774 Malloc8 00:23:51.774 Malloc9 00:23:51.774 Malloc10 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1350754 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1350754 /var/tmp/bdevperf.sock 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1350754 ']' 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.774 16:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.774 { 00:23:51.774 "params": { 00:23:51.774 "name": "Nvme$subsystem", 00:23:51.774 "trtype": "$TEST_TRANSPORT", 00:23:51.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.774 "adrfam": "ipv4", 00:23:51.774 "trsvcid": "$NVMF_PORT", 00:23:51.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.774 "hdgst": ${hdgst:-false}, 00:23:51.774 "ddgst": ${ddgst:-false} 00:23:51.774 }, 00:23:51.774 "method": "bdev_nvme_attach_controller" 00:23:51.774 } 00:23:51.774 EOF 00:23:51.774 )") 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.774 { 00:23:51.774 "params": { 00:23:51.774 "name": "Nvme$subsystem", 00:23:51.774 "trtype": "$TEST_TRANSPORT", 00:23:51.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.774 "adrfam": "ipv4", 00:23:51.774 "trsvcid": "$NVMF_PORT", 00:23:51.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.774 "hdgst": ${hdgst:-false}, 00:23:51.774 "ddgst": ${ddgst:-false} 00:23:51.774 }, 00:23:51.774 "method": "bdev_nvme_attach_controller" 00:23:51.774 } 00:23:51.774 EOF 00:23:51.774 )") 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.774 { 00:23:51.774 "params": { 00:23:51.774 
"name": "Nvme$subsystem", 00:23:51.774 "trtype": "$TEST_TRANSPORT", 00:23:51.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.774 "adrfam": "ipv4", 00:23:51.774 "trsvcid": "$NVMF_PORT", 00:23:51.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.774 "hdgst": ${hdgst:-false}, 00:23:51.774 "ddgst": ${ddgst:-false} 00:23:51.774 }, 00:23:51.774 "method": "bdev_nvme_attach_controller" 00:23:51.774 } 00:23:51.774 EOF 00:23:51.774 )") 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.774 { 00:23:51.774 "params": { 00:23:51.774 "name": "Nvme$subsystem", 00:23:51.774 "trtype": "$TEST_TRANSPORT", 00:23:51.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.774 "adrfam": "ipv4", 00:23:51.774 "trsvcid": "$NVMF_PORT", 00:23:51.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.774 "hdgst": ${hdgst:-false}, 00:23:51.774 "ddgst": ${ddgst:-false} 00:23:51.774 }, 00:23:51.774 "method": "bdev_nvme_attach_controller" 00:23:51.774 } 00:23:51.774 EOF 00:23:51.774 )") 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.774 { 00:23:51.774 "params": { 00:23:51.774 "name": "Nvme$subsystem", 00:23:51.774 "trtype": "$TEST_TRANSPORT", 00:23:51.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.774 "adrfam": "ipv4", 00:23:51.774 "trsvcid": "$NVMF_PORT", 00:23:51.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.774 "hdgst": ${hdgst:-false}, 00:23:51.774 "ddgst": ${ddgst:-false} 00:23:51.774 }, 00:23:51.774 "method": "bdev_nvme_attach_controller" 00:23:51.774 } 00:23:51.774 EOF 00:23:51.774 )") 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.774 { 00:23:51.774 "params": { 00:23:51.774 "name": "Nvme$subsystem", 00:23:51.774 "trtype": "$TEST_TRANSPORT", 00:23:51.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.774 "adrfam": "ipv4", 00:23:51.774 "trsvcid": "$NVMF_PORT", 00:23:51.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.774 "hdgst": ${hdgst:-false}, 00:23:51.774 "ddgst": ${ddgst:-false} 00:23:51.774 }, 00:23:51.774 "method": "bdev_nvme_attach_controller" 00:23:51.774 } 00:23:51.774 EOF 00:23:51.774 )") 00:23:51.774 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.774 [2024-11-20 16:18:27.685830] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:23:51.775 [2024-11-20 16:18:27.685881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350754 ] 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.775 { 00:23:51.775 "params": { 00:23:51.775 "name": "Nvme$subsystem", 00:23:51.775 "trtype": "$TEST_TRANSPORT", 00:23:51.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.775 "adrfam": "ipv4", 00:23:51.775 "trsvcid": "$NVMF_PORT", 00:23:51.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.775 "hdgst": ${hdgst:-false}, 00:23:51.775 "ddgst": ${ddgst:-false} 00:23:51.775 }, 00:23:51.775 "method": "bdev_nvme_attach_controller" 00:23:51.775 } 00:23:51.775 EOF 00:23:51.775 )") 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.775 { 00:23:51.775 "params": { 00:23:51.775 "name": "Nvme$subsystem", 00:23:51.775 "trtype": "$TEST_TRANSPORT", 00:23:51.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.775 "adrfam": "ipv4", 00:23:51.775 "trsvcid": "$NVMF_PORT", 00:23:51.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.775 "hdgst": ${hdgst:-false}, 00:23:51.775 "ddgst": ${ddgst:-false} 00:23:51.775 }, 00:23:51.775 "method": "bdev_nvme_attach_controller" 00:23:51.775 } 00:23:51.775 EOF 00:23:51.775 )") 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.775 { 00:23:51.775 "params": { 00:23:51.775 "name": "Nvme$subsystem", 00:23:51.775 "trtype": "$TEST_TRANSPORT", 00:23:51.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.775 "adrfam": "ipv4", 00:23:51.775 "trsvcid": "$NVMF_PORT", 00:23:51.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.775 "hdgst": ${hdgst:-false}, 00:23:51.775 "ddgst": ${ddgst:-false} 00:23:51.775 }, 00:23:51.775 "method": "bdev_nvme_attach_controller" 00:23:51.775 } 00:23:51.775 EOF 00:23:51.775 )") 00:23:51.775 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:52.037 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:52.037 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:52.037 { 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme$subsystem", 00:23:52.037 "trtype": "$TEST_TRANSPORT", 00:23:52.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.037 
"adrfam": "ipv4", 00:23:52.037 "trsvcid": "$NVMF_PORT", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.037 "hdgst": ${hdgst:-false}, 00:23:52.037 "ddgst": ${ddgst:-false} 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 } 00:23:52.037 EOF 00:23:52.037 )") 00:23:52.037 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:52.037 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:52.037 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:52.037 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme1", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme2", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme3", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme4", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme5", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme6", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme7", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 
00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme8", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme9", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 },{ 00:23:52.037 "params": { 00:23:52.037 "name": "Nvme10", 00:23:52.037 "trtype": "tcp", 00:23:52.037 "traddr": "10.0.0.2", 00:23:52.037 "adrfam": "ipv4", 00:23:52.037 "trsvcid": "4420", 00:23:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:52.037 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:52.037 "hdgst": false, 00:23:52.037 "ddgst": false 00:23:52.037 }, 00:23:52.037 "method": "bdev_nvme_attach_controller" 00:23:52.037 }' 00:23:52.037 [2024-11-20 16:18:27.774871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.037 [2024-11-20 16:18:27.811679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.957 Running I/O for 10 seconds... 
00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:53.957 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:54.219 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:54.497 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1350509 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1350509 ']' 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1350509 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1350509 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:54.498 16:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1350509'
00:23:54.498 killing process with pid 1350509
00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1350509
00:23:54.498 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1350509
00:23:54.498 [2024-11-20 16:18:30.322087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0810 is same with the state(6) to be set
00:23:54.498 [same message repeated through 16:18:30.322460 for tqpair=0x20c0810]
00:23:54.498 [2024-11-20 16:18:30.323610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee520 is same with the state(6) to be set
00:23:54.499 [same message repeated through 16:18:30.323928 for tqpair=0x20ee520]
00:23:54.499 [2024-11-20 16:18:30.325698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c11d0 is same with the state(6) to be set
00:23:54.499 [same message repeated through 16:18:30.326027 for tqpair=0x20c11d0]
00:23:54.500 [2024-11-20 16:18:30.327051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set
00:23:54.500 [same message repeated for tqpair=0x20c16c0]
state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 
16:18:30.327295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.500 [2024-11-20 16:18:30.327320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c16c0 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.327997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same 
with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328107] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the 
state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.328279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1b90 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.501 [2024-11-20 16:18:30.329629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 
16:18:30.329721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same 
with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.329881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2530 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330559] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.502 [2024-11-20 16:18:30.330578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the 
state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.330784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.333214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25eb8f0 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.333337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 
16:18:30.333398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ce9d0 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.333435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259dcd0 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.333524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25df8d0 is same with the state(6) to be set 00:23:54.503 [2024-11-20 16:18:30.333611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.503 [2024-11-20 16:18:30.333619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.503 [2024-11-20 16:18:30.333628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216e050 is same with the state(6) to be set 00:23:54.504 [2024-11-20 16:18:30.333698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171cb0 is same with the state(6) to be set 00:23:54.504 [2024-11-20 16:18:30.333781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333813] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259d0b0 is same with the state(6) to be set 00:23:54.504 [2024-11-20 16:18:30.333866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171850 is same with the state(6) to be set 00:23:54.504 [2024-11-20 16:18:30.333953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.333985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.333993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.334001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.504 [2024-11-20 16:18:30.334008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.334015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2089610 is same with the state(6) to be set 00:23:54.504 [2024-11-20 16:18:30.335081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:54.504 [2024-11-20 16:18:30.335267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.504 [2024-11-20 16:18:30.335400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.504 [2024-11-20 16:18:30.335408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:54.505 [2024-11-20 16:18:30.335434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 
[2024-11-20 16:18:30.335603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 
16:18:30.335773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 
16:18:30.335940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.335989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.335997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.336006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.336013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.336022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.336029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.505 [2024-11-20 16:18:30.336038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.505 [2024-11-20 16:18:30.336045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 16:18:30.336055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 16:18:30.336072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 16:18:30.336091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 
16:18:30.336108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 16:18:30.336124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 16:18:30.336141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 16:18:30.336161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506 [2024-11-20 16:18:30.336178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.506 [2024-11-20 16:18:30.336185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.506
[2024-11-20 16:18:30.341379 - 16:18:30.341433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2a20 is same with the state(6) to be set [identical message repeated 8 times; per-repetition timestamps elided] 00:23:54.506
[2024-11-20 16:18:30.341901 - 16:18:30.342202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2ef0 is same with the state(6) to be set [identical message repeated 63 times; per-repetition timestamps elided] 00:23:54.507
[2024-11-20 16:18:30.356707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:54.507 [2024-11-20 16:18:30.356753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259d0b0 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.356786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25eb8f0 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.356807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ce9d0 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.356842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.507 [2024-11-20 16:18:30.356852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.356861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.507 [2024-11-20 16:18:30.356869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.356877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.507 [2024-11-20 16:18:30.356884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.356893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.507 [2024-11-20 16:18:30.356900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.356908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25cb910 is same with the state(6) to be set 00:23:54.507 [2024-11-20 16:18:30.356927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259dcd0 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.356945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25df8d0 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.356962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216e050 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.356978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171cb0 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.356995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171850 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.357011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2089610 (9): Bad file descriptor 00:23:54.507 [2024-11-20 16:18:30.357108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.507 [2024-11-20 16:18:30.357389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.507 [2024-11-20 16:18:30.357396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:54.508 [2024-11-20 16:18:30.357529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 
[2024-11-20 16:18:30.357697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 16:18:30.357863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.508 [2024-11-20 16:18:30.357872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.508 [2024-11-20 
00:23:54.508 [2024-11-20 16:18:30.357879 - 16:18:30.358215] nvme_qpair.c: 243/474 (nvme_io_qpair_print_command / spdk_nvme_print_completion): *NOTICE*: 20x WRITE sqid:1 cid:44-63 nsid:1 lba:30208-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.509 [2024-11-20 16:18:30.358573] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:54.509 [2024-11-20 16:18:30.358649] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:54.509 [2024-11-20 16:18:30.359844] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:54.509 [2024-11-20 16:18:30.359885] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:54.509 [2024-11-20 16:18:30.359922] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:54.509 [2024-11-20 16:18:30.360313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:54.509 [2024-11-20 16:18:30.360672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.509 [2024-11-20 16:18:30.360689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x259d0b0 with addr=10.0.0.2, port=4420
00:23:54.509 [2024-11-20 16:18:30.360698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259d0b0 is same with the state(6) to be set
00:23:54.509 [2024-11-20 16:18:30.360778 - 16:18:30.361878] nvme_qpair.c: 243/474 (nvme_io_qpair_print_command / spdk_nvme_print_completion): *NOTICE*: 64x (WRITE sqid:1 cid:0-4 nsid:1 lba:32768-33280, READ sqid:1 cid:5-63 nsid:1 lba:25216-32640) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.511 [2024-11-20 16:18:30.361886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25751a0 is same with the state(6) to be set
00:23:54.511 [2024-11-20 16:18:30.362448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.511 [2024-11-20 16:18:30.362488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2171850 with addr=10.0.0.2, port=4420
00:23:54.511 [2024-11-20 16:18:30.362500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171850 is same with the state(6) to be set
00:23:54.511 [2024-11-20 16:18:30.362519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259d0b0 (9): Bad file descriptor
00:23:54.511 [2024-11-20 16:18:30.364099] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:54.511 [2024-11-20 16:18:30.364155] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:54.511 [2024-11-20 16:18:30.364182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:54.511 [2024-11-20 16:18:30.364209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171850 (9): Bad file descriptor
00:23:54.511 [2024-11-20 16:18:30.364221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:54.511 [2024-11-20 16:18:30.364230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:54.511 [2024-11-20 16:18:30.364241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:54.511 [2024-11-20 16:18:30.364251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:54.511 [2024-11-20 16:18:30.364649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.511 [2024-11-20 16:18:30.364667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089610 with addr=10.0.0.2, port=4420
00:23:54.511 [2024-11-20 16:18:30.364674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2089610 is same with the state(6) to be set
00:23:54.511 [2024-11-20 16:18:30.364682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:54.511 [2024-11-20 16:18:30.364689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:54.511 [2024-11-20 16:18:30.364696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:54.511 [2024-11-20 16:18:30.364703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:54.511 [2024-11-20 16:18:30.365017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2089610 (9): Bad file descriptor
00:23:54.511 [2024-11-20 16:18:30.365065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:54.511 [2024-11-20 16:18:30.365072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:54.511 [2024-11-20 16:18:30.365079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:54.511 [2024-11-20 16:18:30.365086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
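The repeated "connect() failed, errno = 111" lines above are Linux ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 while the host is trying to reconnect its qpairs, so each reconnect attempt fails and bdev_nvme eventually gives up with "Resetting controller failed." for cnode2, cnode6 and cnode7. A minimal standalone sketch (not SPDK's sock layer; address and port taken from the log) of how a refused connect() surfaces as errno 111:

/* Standalone illustration: a TCP connect() to a port with no listener
 * fails with errno 111 (ECONNREFUSED) on Linux, matching the
 * posix_sock_create error lines above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}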
00:23:54.511 [2024-11-20 16:18:30.366749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25cb910 (9): Bad file descriptor
00:23:54.511 [2024-11-20 16:18:30.366885 - 16:18:30.367984] nvme_qpair.c: 243/474 (nvme_io_qpair_print_command / spdk_nvme_print_completion): *NOTICE*: 64x READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.513 [2024-11-20 16:18:30.367992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2375e00 is same with the state(6) to be set
00:23:54.513 [2024-11-20 16:18:30.369270 - 16:18:30.370036] nvme_qpair.c: 243/474 (nvme_io_qpair_print_command / spdk_nvme_print_completion): *NOTICE*: 45x READ sqid:1 cid:0-44 nsid:1 lba:24576-30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.514 [2024-11-20 16:18:30.370045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:54.514 [2024-11-20 16:18:30.370055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 
16:18:30.370225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.370361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.370369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2616fc0 is same with the state(6) to be set 00:23:54.514 [2024-11-20 16:18:30.371652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.371678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.371699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.371720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.371740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.371757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.371776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.514 [2024-11-20 16:18:30.371793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.514 [2024-11-20 16:18:30.371801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371862] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.371988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.371998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.515 [2024-11-20 16:18:30.372465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.515 [2024-11-20 16:18:30.372474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:54.516 [2024-11-20 16:18:30.372549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 
16:18:30.372718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.372752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.372760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26184a0 is same with the state(6) to be set 00:23:54.516 [2024-11-20 16:18:30.374024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.516 [2024-11-20 16:18:30.374291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.516 [2024-11-20 16:18:30.374298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.517 [2024-11-20 16:18:30.374836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.517 [2024-11-20 16:18:30.374846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:54.517 [2024-11-20 16:18:30.374853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 16:18:30.374862 - 16:18:30.375120: 16 further identical READ command / ABORTED - SQ DELETION (00/08) completion pairs elided (cid:48-63, lba:22528-24448, len:128) ...]
00:23:54.518 [2024-11-20 16:18:30.375128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572630 is same with the state(6) to be set
00:23:54.518 [2024-11-20 16:18:30.376394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:54.518 [2024-11-20 16:18:30.376408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 16:18:30.376419 - 16:18:30.377480: 63 further identical command / ABORTED - SQ DELETION (00/08) completion pairs elided (READ cid:5-55, lba:25216-31616; WRITE cid:0-3, lba:32768-33152; READ cid:56-63, lba:31744-32640; all len:128) ...]
00:23:54.519 [2024-11-20 16:18:30.377489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2576730 is same with the state(6) to be set
00:23:54.519 [2024-11-20 16:18:30.378767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:54.519 [2024-11-20 16:18:30.378782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 16:18:30.378794 - 16:18:30.379845: 63 further identical READ command / ABORTED - SQ DELETION (00/08) completion pairs elided (cid:1-63, lba:16512-24448, len:128) ...]
00:23:54.521 [2024-11-20 16:18:30.379853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25791a0 is same with the state(6) to be set
00:23:54.521 [2024-11-20 16:18:30.381106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:54.521 [2024-11-20 16:18:30.381123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:54.521 [2024-11-20 16:18:30.381132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:54.521 [2024-11-20 16:18:30.381142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:54.521 [2024-11-20 16:18:30.381219] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:54.521 [2024-11-20 16:18:30.381241] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:54.521 [2024-11-20 16:18:30.381317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:54.521 [2024-11-20 16:18:30.381328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:54.521 [2024-11-20 16:18:30.381726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.521 [2024-11-20 16:18:30.381741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2171cb0 with addr=10.0.0.2, port=4420
00:23:54.521 [2024-11-20 16:18:30.381748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171cb0 is same with the state(6) to be set
00:23:54.521 [2024-11-20 16:18:30.382082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.521 [2024-11-20 16:18:30.382092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216e050 with addr=10.0.0.2, port=4420
00:23:54.521 [2024-11-20 16:18:30.382100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216e050 is same with the state(6) to be set
00:23:54.521 [2024-11-20 16:18:30.382282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.522 [2024-11-20 16:18:30.382293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x259dcd0 with addr=10.0.0.2, port=4420
00:23:54.522 [2024-11-20 16:18:30.382301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259dcd0 is same with the state(6) to be set
00:23:54.522 [2024-11-20 16:18:30.382641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.522 [2024-11-20 16:18:30.382650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25eb8f0 with addr=10.0.0.2, port=4420
00:23:54.522 [2024-11-20 16:18:30.382658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25eb8f0 is same with the state(6) to be set
00:23:54.522 [2024-11-20 16:18:30.384004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:54.522 [2024-11-20 16:18:30.384018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 16:18:30.384030 - 16:18:30.384923: 53 further identical READ command / ABORTED - SQ DELETION (00/08) completion pairs elided (cid:1-53, lba:24704-31360, len:128) ...]
00:23:54.523 [2024-11-20 16:18:30.384932] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.384939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.384949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.384956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.384966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.384973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.384983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.384990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.384999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.385006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.385016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.385023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.385033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.385040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.385049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.385056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.385066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.385073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.385082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.523 [2024-11-20 16:18:30.385089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.523 [2024-11-20 16:18:30.385097] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2577c70 is same with the state(6) to be set
00:23:54.523 [2024-11-20 16:18:30.387055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:54.523 [2024-11-20 16:18:30.387081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:54.523 [2024-11-20 16:18:30.387095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:54.523 task offset: 24576 on job bdev=Nvme6n1 fails
00:23:54.523
00:23:54.523 Latency(us)
00:23:54.523 [2024-11-20T15:18:30.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:54.523 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.523 Job: Nvme1n1 ended in about 0.97 seconds with error
00:23:54.523 Verification LBA range: start 0x0 length 0x400
00:23:54.523 Nvme1n1 : 0.97 131.85 8.24 65.93 0.00 320141.37 16165.55 248162.99
00:23:54.523 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.523 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:54.523 Verification LBA range: start 0x0 length 0x400
00:23:54.523 Nvme2n1 : 0.96 199.71 12.48 66.57 0.00 232989.23 3686.40 251658.24
00:23:54.523 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.523 Job: Nvme3n1 ended in about 0.97 seconds with error
00:23:54.523 Verification LBA range: start 0x0 length 0x400
00:23:54.523 Nvme3n1 : 0.97 197.30 12.33 65.77 0.00 231141.33 16056.32 249910.61
00:23:54.523 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.523 Job: Nvme4n1 ended in about 0.98 seconds with error
00:23:54.523 Verification LBA range: start 0x0 length 0x400
00:23:54.523 Nvme4n1 : 0.98 196.82 12.30 65.61 0.00 227026.56 20097.71 249910.61
00:23:54.523 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.523 Job: Nvme5n1 ended in about 0.98 seconds with error
00:23:54.523 Verification LBA range: start 0x0 length 0x400
00:23:54.523 Nvme5n1 : 0.98 130.90 8.18 65.45 0.00 297296.50 17694.72 258648.75
00:23:54.523 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.523 Job: Nvme6n1 ended in about 0.96 seconds with error
00:23:54.523 Verification LBA range: start 0x0 length 0x400
00:23:54.523 Nvme6n1 : 0.96 200.39 12.52 66.80 0.00 213252.91 20862.29 253405.87
00:23:54.523 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.523 Job: Nvme7n1 ended in about 0.97 seconds with error
00:23:54.523 Verification LBA range: start 0x0 length 0x400
00:23:54.524 Nvme7n1 : 0.97 204.01 12.75 66.28 0.00 206380.33 8301.23 246415.36
00:23:54.524 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.524 Job: Nvme8n1 ended in about 0.98 seconds with error
00:23:54.524 Verification LBA range: start 0x0 length 0x400
00:23:54.524 Nvme8n1 : 0.98 199.95 12.50 65.29 0.00 206055.48 15182.51 230686.72
00:23:54.524 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.524 Job: Nvme9n1 ended in about 0.99 seconds with error
00:23:54.524 Verification LBA range: start 0x0 length 0x400
00:23:54.524 Nvme9n1 : 0.99 194.37 12.15 64.79 0.00 206417.71 20971.52 241172.48
00:23:54.524 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.524 Job: Nvme10n1 ended in about 0.98 seconds with error
00:23:54.524 Verification LBA range: start 0x0 length 0x400
00:23:54.524 Nvme10n1 : 0.98 130.27 8.14 65.13 0.00 267291.31 20206.93 270882.13
[2024-11-20T15:18:30.460Z] ===================================================================================================================
00:23:54.524 [2024-11-20T15:18:30.460Z] Total : 1785.57 111.60 657.61 0.00 236297.68 3686.40 270882.13
00:23:54.785 [2024-11-20 16:18:30.414783] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:54.785 [2024-11-20 16:18:30.414814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:54.785 [2024-11-20 16:18:30.415177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.785 [2024-11-20 16:18:30.415201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25df8d0 with addr=10.0.0.2, port=4420
00:23:54.785 [2024-11-20 16:18:30.415211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25df8d0 is same with the state(6) to be set
00:23:54.786 [2024-11-20 16:18:30.415665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.786 [2024-11-20 16:18:30.415676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ce9d0 with addr=10.0.0.2, port=4420
00:23:54.786 [2024-11-20 16:18:30.415683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ce9d0 is same with the state(6) to be set
00:23:54.786 [2024-11-20 16:18:30.415695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171cb0 (9): Bad file descriptor
00:23:54.786 [2024-11-20 16:18:30.415707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216e050 (9): Bad file descriptor
00:23:54.786 [2024-11-20 16:18:30.415717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259dcd0 (9): Bad file descriptor
00:23:54.786 [2024-11-20 16:18:30.415726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25eb8f0 (9): Bad file descriptor
00:23:54.786 [2024-11-20 16:18:30.416076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.786 [2024-11-20 16:18:30.416089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x259d0b0 with addr=10.0.0.2, port=4420
00:23:54.786 [2024-11-20 16:18:30.416097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259d0b0 is same with the state(6) to be set
00:23:54.786 [2024-11-20 16:18:30.416270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.786 [2024-11-20 16:18:30.416281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2171850 with addr=10.0.0.2, port=4420
00:23:54.786 [2024-11-20 16:18:30.416288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171850 is same with the state(6) to be set
00:23:54.786 [2024-11-20 16:18:30.416629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.786 [2024-11-20 16:18:30.416639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089610 with addr=10.0.0.2, port=4420
00:23:54.786 [2024-11-20 16:18:30.416647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2089610 is same with the state(6) to be set
00:23:54.786 [2024-11-20 16:18:30.416960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.786 [2024-11-20 16:18:30.416970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25cb910 with addr=10.0.0.2, port=4420 00:23:54.786 [2024-11-20 16:18:30.416977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25cb910 is same with the state(6) to be set 00:23:54.786 [2024-11-20 16:18:30.416987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25df8d0 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.416996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ce9d0 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.417005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417167] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:23:54.786 [2024-11-20 16:18:30.417180] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:54.786 [2024-11-20 16:18:30.417545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259d0b0 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.417558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171850 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.417567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2089610 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.417577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25cb910 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.417585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:54.786 [2024-11-20 16:18:30.417678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:54.786 [2024-11-20 16:18:30.417687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:54.786 [2024-11-20 16:18:30.417695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:54.786 [2024-11-20 16:18:30.417727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:23:54.786 [2024-11-20 16:18:30.417758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:54.786 [2024-11-20 16:18:30.417812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:54.786 [2024-11-20 16:18:30.417818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:54.786 [2024-11-20 16:18:30.417825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:54.786 [2024-11-20 16:18:30.417831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:23:54.786 [2024-11-20 16:18:30.418181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.786 [2024-11-20 16:18:30.418195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25eb8f0 with addr=10.0.0.2, port=4420 00:23:54.786 [2024-11-20 16:18:30.418203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25eb8f0 is same with the state(6) to be set 00:23:54.786 [2024-11-20 16:18:30.418525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.786 [2024-11-20 16:18:30.418535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x259dcd0 with addr=10.0.0.2, port=4420 00:23:54.786 [2024-11-20 16:18:30.418542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259dcd0 is same with the state(6) to be set 00:23:54.786 [2024-11-20 16:18:30.418845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.786 [2024-11-20 16:18:30.418855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216e050 with addr=10.0.0.2, port=4420 00:23:54.786 [2024-11-20 16:18:30.418862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216e050 is same with the state(6) to be set 00:23:54.786 [2024-11-20 16:18:30.419056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.786 [2024-11-20 16:18:30.419069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2171cb0 with addr=10.0.0.2, port=4420 00:23:54.786 [2024-11-20 16:18:30.419077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171cb0 is same with the state(6) to be set 00:23:54.786 [2024-11-20 16:18:30.419107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25eb8f0 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.419118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259dcd0 (9): Bad file descriptor 00:23:54.786 [2024-11-20 16:18:30.419127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216e050 (9): Bad file descriptor 00:23:54.787 [2024-11-20 16:18:30.419136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171cb0 (9): Bad file descriptor 00:23:54.787 [2024-11-20 16:18:30.419182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:54.787 [2024-11-20 16:18:30.419190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:54.787 [2024-11-20 16:18:30.419197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:54.787 [2024-11-20 16:18:30.419204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:54.787 [2024-11-20 16:18:30.419212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:54.787 [2024-11-20 16:18:30.419218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:54.787 [2024-11-20 16:18:30.419225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:23:54.787 [2024-11-20 16:18:30.419231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:54.787 [2024-11-20 16:18:30.419238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:54.787 [2024-11-20 16:18:30.419245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:54.787 [2024-11-20 16:18:30.419252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:54.787 [2024-11-20 16:18:30.419258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:54.787 [2024-11-20 16:18:30.419265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:54.787 [2024-11-20 16:18:30.419271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:54.787 [2024-11-20 16:18:30.419278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:54.787 [2024-11-20 16:18:30.419285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:54.787 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:55.730 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1350754 00:23:55.730 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:55.730 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1350754 00:23:55.730 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:55.730 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.730 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:55.730 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1350754 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:55.731 16:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.731 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.731 rmmod nvme_tcp 00:23:55.731 rmmod nvme_fabrics 00:23:55.731 rmmod nvme_keyring 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1350509 ']' 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1350509 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1350509 ']' 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1350509 00:23:55.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1350509) - No such process 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1350509 is not found' 00:23:55.991 Process with pid 1350509 is not found 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.991 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.906 00:23:57.906 real 0m7.937s 00:23:57.906 user 0m19.730s 00:23:57.906 sys 0m1.274s 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.906 ************************************ 00:23:57.906 END TEST nvmf_shutdown_tc3 00:23:57.906 ************************************ 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.906 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:58.167 ************************************ 00:23:58.167 START TEST nvmf_shutdown_tc4 00:23:58.167 ************************************ 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.167 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:58.168 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:58.168 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:58.168 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:58.168 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:58.168 16:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.168 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.168 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.169 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.169 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:58.169 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:58.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:23:58.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:23:58.430 00:23:58.430 --- 10.0.0.2 ping statistics --- 00:23:58.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.430 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:58.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:23:58.430 00:23:58.430 --- 10.0.0.1 ping statistics --- 00:23:58.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.430 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1352044 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1352044 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1352044 ']' 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.430 16:18:34 
00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:58.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:58.430 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:58.431 [2024-11-20 16:18:34.294553] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization...
00:23:58.431 [2024-11-20 16:18:34.294617] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:58.691 [2024-11-20 16:18:34.393491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:58.691 [2024-11-20 16:18:34.433203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:58.691 [2024-11-20 16:18:34.433237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:58.691 [2024-11-20 16:18:34.433244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:58.691 [2024-11-20 16:18:34.433249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:58.691 [2024-11-20 16:18:34.433254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:58.691 [2024-11-20 16:18:34.434673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:58.691 [2024-11-20 16:18:34.434828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:23:58.691 [2024-11-20 16:18:34.434983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:58.691 [2024-11-20 16:18:34.434984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:59.261 [2024-11-20 16:18:35.144351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
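nvmfappstart wraps the target launch seen above: it starts build/bin/nvmf_tgt inside the namespace with core mask 0x1E (cores 1-4, matching the four reactors in the startup notices), and waitforlisten blocks until the app answers on its RPC socket before the first rpc_cmd runs. Roughly, as a sketch that ignores the retry bookkeeping (the spdk_get_version probe stands in for waitforlisten's internal check; paths assume the SPDK repo root):

    # Launch the target in the namespace; -m 0x1E pins reactors to cores 1-4
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Block until the app is listening on /var/tmp/spdk.sock
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
    # First RPC of the test proper, exactly as issued at shutdown.sh@21:
    # create the TCP transport (-u 8192 sets an 8 KiB I/O unit size)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192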
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.261 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.522 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.522 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.522 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:23:59.522 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:23:59.522 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
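The @28/@29 loop appends one block of RPC commands per subsystem to rpcs.txt; the heredoc bodies themselves are hidden by xtrace, but the Malloc1-Malloc10 bdevs and the nqn.2016-06.io.spdk:cnode1/cnode2/cnode10 names that surface later pin down its shape. The following is a plausible reconstruction, not the script's literal contents (the RPC names are real SPDK RPCs; the malloc size and block-size arguments are assumed, not taken from this log):

    rm -rf rpcs.txt
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 128 512"    # size/block size assumed
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"    # serial assumed
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    # rpc_cmd at shutdown.sh@36 then replays the accumulated batch against the target
    while read -r rpc; do ./scripts/rpc.py $rpc; done < rpcs.txt

The Malloc1..Malloc10 lines and the "Listening on 10.0.0.2 port 4420" notice that follow are consistent with exactly this sequence.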
00:23:59.522 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:59.522 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:59.522 Malloc1
00:23:59.522 [2024-11-20 16:18:35.250933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:59.522 Malloc2
00:23:59.522 Malloc3
00:23:59.522 Malloc4
00:23:59.522 Malloc5
00:23:59.522 Malloc6
00:23:59.782 Malloc7
00:23:59.782 Malloc8
00:23:59.782 Malloc9
00:23:59.782 Malloc10
00:23:59.782 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:59.782 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:23:59.782 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:59.782 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:59.782 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1352428
00:23:59.782 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:23:59.782 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:24:00.041 [2024-11-20 16:18:35.733902] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
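The workload side is a plain spdk_nvme_perf run: queue depth 128 (-q), 45056-byte (44 KiB) I/Os (-o), 100% random writes (-w randwrite) for 20 seconds (-t), with the target found through the -r transport ID string; -O and -P are kept exactly as logged (-P 4 plausibly accounts for the four distinct "qpair id" values in the errors below). The timing is the point of the test case: perf gets a 5-second head start and is then undercut mid-run. A sketch of that arrangement:

    # 20 s of deep random-write load against the namespaced target
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5    # let I/O reach steady state before the shutdown is injected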
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1352044
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1352044 ']'
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1352044
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1352044
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1352044'
00:24:05.333 killing process with pid 1352044
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1352044
00:24:05.333 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1352044
00:24:05.333 [2024-11-20 16:18:40.736849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ca350 is same with the state(6) to be set
00:24:05.333 [... the same tcp.c:1773 recv-state error repeats in short bursts for tqpair=0x14ca350, 0x14ca820, 0x14cacf0, 0x14c9e80, 0x14c9010, 0x14c94e0, 0x14c99b0 and 0x14c8b40 while the dying target tears down its TCP qpairs; the repeats are elided here ...]
00:24:05.334 Write completed with error (sct=0, sc=8)
00:24:05.334 starting I/O failed: -6
00:24:05.334 [... the two lines above repeat for every write still queued on each qpair; only the distinct per-qpair transport errors are kept below ...]
00:24:05.334 [2024-11-20 16:18:40.738908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:05.335 [2024-11-20 16:18:40.739877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:05.336 [2024-11-20 16:18:40.741659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:05.336 NVMe io qpair process completion error
00:24:05.336 [2024-11-20 16:18:40.742697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:05.336 [2024-11-20 16:18:40.743499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:05.337 [2024-11-20 16:18:40.744433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:05.338 [2024-11-20 16:18:40.745851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:05.338 NVMe io qpair process completion error
00:24:05.338 [2024-11-20 16:18:40.746971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:05.338 [2024-11-20 16:18:40.747770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:05.339 [2024-11-20 16:18:40.748689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:05.340 [2024-11-20 16:18:40.751209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:05.340 NVMe io qpair process completion error
00:24:05.340 [... 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' output continues for the remaining qpairs and is truncated here ...]
with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, 
sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 starting I/O failed: -6 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.340 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 [2024-11-20 16:18:40.753278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write 
completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 [2024-11-20 16:18:40.754173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 
00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.341 Write completed with error (sct=0, sc=8) 00:24:05.341 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 [2024-11-20 16:18:40.755757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:05.342 NVMe io qpair process completion error 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write 
completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 [2024-11-20 16:18:40.756798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O 
failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 [2024-11-20 16:18:40.757604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 Write completed with error (sct=0, sc=8) 00:24:05.342 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 
00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 [2024-11-20 16:18:40.758523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 
00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.343 Write completed with error (sct=0, sc=8) 00:24:05.343 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 
00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 [2024-11-20 16:18:40.760186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:05.344 NVMe io qpair process completion error 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 [2024-11-20 16:18:40.761464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.344 starting I/O failed: -6 00:24:05.344 
starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 [2024-11-20 16:18:40.762445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, 
sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 Write completed with error (sct=0, sc=8) 00:24:05.344 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 
00:24:05.345 [2024-11-20 16:18:40.763385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting 
I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.345 Write completed with error (sct=0, sc=8) 00:24:05.345 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 [2024-11-20 16:18:40.766734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:05.346 NVMe io qpair process completion error 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 starting I/O failed: -6 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, sc=8) 00:24:05.346 Write completed with error (sct=0, 
sc=8)
00:24:05.346 [long runs of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records, repeated for every queued write and interleaved between the qpair errors below, are elided]
00:24:05.346 [2024-11-20 16:18:40.767973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:05.346 [2024-11-20 16:18:40.768797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:05.347 [2024-11-20 16:18:40.769727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:05.348 [2024-11-20 16:18:40.771352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:05.348 NVMe io qpair process completion error
00:24:05.348 [2024-11-20 16:18:40.772688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:05.348 [2024-11-20 16:18:40.773662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:05.349 [2024-11-20 16:18:40.774585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:05.350 [2024-11-20 16:18:40.776657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:05.350 NVMe io qpair process completion error
00:24:05.350 [2024-11-20 16:18:40.777874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:05.350 [2024-11-20 16:18:40.778690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:05.351 [2024-11-20 16:18:40.779624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:05.351 [2024-11-20 16:18:40.781470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:05.351 NVMe io qpair process completion error
00:24:05.353 [2024-11-20 16:18:40.783731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:05.353 [2024-11-20 16:18:40.786753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:05.353 NVMe io qpair process completion error
00:24:05.353 Initializing NVMe Controllers
00:24:05.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:05.353 Controller IO queue size 128, less than required.
00:24:05.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:05.353 [the same two queue-size advisory lines follow each controller below; duplicates elided]
00:24:05.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:05.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:05.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:05.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:05.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:05.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:05.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:05.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:05.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
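A note for anyone triaging the flood above (editorial, not part of the captured log): each completion line carries an NVMe status-code type (sct) and status code (sc). Assuming the standard NVMe encoding that SPDK follows, sct=0 is the generic command status class, in which sc=0x08 means the command was aborted because its submission queue was deleted, which is exactly what in-flight writes see while the target's subsystems are torn down; and the -6 in "starting I/O failed: -6" matches the ENXIO ("No such device or address") named in the CQ transport errors. A minimal Python sketch of that decoding:

# Sketch only: decode the (sct, sc) pairs printed above. Values are
# transcribed from the NVMe base specification; verify against SPDK's
# include/spdk/nvme_spec.h before relying on them.
import errno

SCT = {0: "GENERIC", 1: "COMMAND_SPECIFIC", 2: "MEDIA_ERROR", 3: "PATH", 7: "VENDOR_SPECIFIC"}
GENERIC_SC = {
    0x00: "SUCCESS",
    0x04: "DATA_TRANSFER_ERROR",
    0x07: "ABORTED_BY_REQUEST",
    0x08: "ABORTED_SQ_DELETION",  # what the failing writes above report
}

def decode(sct: int, sc: int) -> str:
    kind = SCT.get(sct, f"sct={sct:#x}")
    name = GENERIC_SC.get(sc, f"sc={sc:#x}") if sct == 0 else f"sc={sc:#x}"
    return f"{kind}/{name}"

print(decode(0, 8))        # -> GENERIC/ABORTED_SQ_DELETION
print(errno.errorcode[6])  # -> ENXIO ("No such device or address" on Linux)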
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:05.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:05.354 Initialization complete. Launching workers.
00:24:05.354 ========================================================
00:24:05.354                                                                          Latency(us)
00:24:05.354 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1920.91      82.54   66649.19     817.03  118635.84
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:     1832.81      78.75   69873.66     816.44  152665.19
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:     1884.90      80.99   67974.45     532.37  120542.14
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:     1900.55      81.66   67433.93     859.92  118524.30
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:     1856.18      79.76   69072.09     591.77  120886.30
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:     1897.33      81.53   67620.41     699.55  128922.75
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:     1885.76      81.03   68060.96     677.53  120867.18
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:     1913.41      82.22   67106.97     673.27  121911.15
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:     1874.83      80.56   68502.88     623.92  135535.67
00:24:05.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     1852.75      79.61   68625.30     600.20  122856.81
00:24:05.354 ========================================================
00:24:05.354 Total                                                                  :   18819.43     808.65   68079.31     532.37  152665.19
00:24:05.354
00:24:05.354 [2024-11-20 16:18:40.791380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf79900 is same with the state(6) to be set
00:24:05.354 [2024-11-20 16:18:40.791426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf79ae0 is same with the state(6) to be set
00:24:05.354 [2024-11-20 16:18:40.791456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78410 is same with the state(6) to be set
00:24:05.354 [2024-11-20 16:18:40.791486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf77560 is same with the state(6) to be set
00:24:05.354 [2024-11-20 16:18:40.791515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78740 is same with the state(6) to be set
00:24:05.354 [2024-11-20 16:18:40.791543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf77890 is same with the state(6) to be set
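A quick cross-check of the latency summary above (a standalone sketch, not part of the harness): the Total row should be the column sums for IOPS and MiB/s, an IOPS-weighted mean for the average latency, and the column extrema for min and max. Recomputing from the ten device rows reproduces it:

# Sketch: recompute the 'Total' row from the per-subsystem rows above.
rows = {  # subsystem: (IOPS, MiB/s, avg_us, min_us, max_us)
    "cnode10": (1920.91, 82.54, 66649.19, 817.03, 118635.84),
    "cnode2":  (1832.81, 78.75, 69873.66, 816.44, 152665.19),
    "cnode7":  (1884.90, 80.99, 67974.45, 532.37, 120542.14),
    "cnode3":  (1900.55, 81.66, 67433.93, 859.92, 118524.30),
    "cnode8":  (1856.18, 79.76, 69072.09, 591.77, 120886.30),
    "cnode4":  (1897.33, 81.53, 67620.41, 699.55, 128922.75),
    "cnode9":  (1885.76, 81.03, 68060.96, 677.53, 120867.18),
    "cnode6":  (1913.41, 82.22, 67106.97, 673.27, 121911.15),
    "cnode5":  (1874.83, 80.56, 68502.88, 623.92, 135535.67),
    "cnode1":  (1852.75, 79.61, 68625.30, 600.20, 122856.81),
}
iops = sum(r[0] for r in rows.values())
mibs = sum(r[1] for r in rows.values())
avg = sum(r[0] * r[2] for r in rows.values()) / iops  # IOPS-weighted mean latency
lo = min(r[3] for r in rows.values())
hi = max(r[4] for r in rows.values())
print(f"{iops:.2f} {mibs:.2f} {avg:.2f} {lo:.2f} {hi:.2f}")
# -> 18819.43 808.65 ~68079.3 532.37 152665.19, matching the Total row to rounding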
*ERROR*: The recv state of tqpair=0xf77890 is same with the state(6) to be set 00:24:05.354 [2024-11-20 16:18:40.791571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78a70 is same with the state(6) to be set 00:24:05.354 [2024-11-20 16:18:40.791600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf77ef0 is same with the state(6) to be set 00:24:05.354 [2024-11-20 16:18:40.791628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf77bc0 is same with the state(6) to be set 00:24:05.354 [2024-11-20 16:18:40.791657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf79720 is same with the state(6) to be set 00:24:05.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:05.354 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1352428 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1352428 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1352428 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:06.296 16:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:06.296 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.297 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.297 rmmod nvme_tcp 00:24:06.297 rmmod nvme_fabrics 00:24:06.297 rmmod nvme_keyring 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1352044 ']' 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1352044 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1352044 ']' 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1352044 00:24:06.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1352044) - No such process 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1352044 is not found' 00:24:06.297 Process with pid 1352044 is not found 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.297 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.842 16:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:08.842
00:24:08.842 real 0m10.303s
00:24:08.842 user 0m28.108s
00:24:08.842 sys 0m3.881s
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:08.842 ************************************
00:24:08.842 END TEST nvmf_shutdown_tc4
00:24:08.842 ************************************
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:24:08.842
00:24:08.842 real 0m42.941s
00:24:08.842 user 1m43.072s
00:24:08.842 sys 0m13.722s
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:08.842 ************************************
00:24:08.842 END TEST nvmf_shutdown
00:24:08.842 ************************************
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:08.842 ************************************
00:24:08.842 START TEST nvmf_nsid
00:24:08.842 ************************************
00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:08.842 * Looking for test storage...
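The run_test wrapper visible above is what prints the START/END banners and the per-test real/user/sys timing. To iterate on just this test outside the Jenkins pipeline, the same script can be invoked directly (assuming a built SPDK tree at the path shown and root privileges, since the test reconfigures NICs, network namespaces, and iptables):

    # Standalone invocation of the nsid test, exactly as run_test launches it
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp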
00:24:08.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.842 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.843 --rc genhtml_branch_coverage=1 00:24:08.843 --rc genhtml_function_coverage=1 00:24:08.843 --rc genhtml_legend=1 00:24:08.843 --rc geninfo_all_blocks=1 00:24:08.843 --rc geninfo_unexecuted_blocks=1 00:24:08.843 00:24:08.843 ' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.843 --rc genhtml_branch_coverage=1 00:24:08.843 --rc genhtml_function_coverage=1 00:24:08.843 --rc genhtml_legend=1 00:24:08.843 --rc geninfo_all_blocks=1 00:24:08.843 --rc geninfo_unexecuted_blocks=1 00:24:08.843 00:24:08.843 ' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.843 --rc genhtml_branch_coverage=1 00:24:08.843 --rc genhtml_function_coverage=1 00:24:08.843 --rc genhtml_legend=1 00:24:08.843 --rc geninfo_all_blocks=1 00:24:08.843 --rc geninfo_unexecuted_blocks=1 00:24:08.843 00:24:08.843 ' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.843 --rc genhtml_branch_coverage=1 00:24:08.843 --rc genhtml_function_coverage=1 00:24:08.843 --rc genhtml_legend=1 00:24:08.843 --rc geninfo_all_blocks=1 00:24:08.843 --rc geninfo_unexecuted_blocks=1 00:24:08.843 00:24:08.843 ' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.843 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.844 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.844 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
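The 0x8086/0x159b matches above are the two Intel E810 ports (driven by ice) that the phy tests pair up. Outside the harness, the same inventory can be taken with lspci's vendor:device filter (assuming pciutils is installed):

    # List PCI functions with vendor 0x8086 and device 0x159b; -k also shows
    # which kernel driver (expected: ice) is bound to each port.
    lspci -k -d 8086:159b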
00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.990 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.990 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.990 16:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:24:16.990 00:24:16.990 --- 10.0.0.2 ping statistics --- 00:24:16.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.990 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:24:16.990 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:16.991 00:24:16.991 --- 10.0.0.1 ping statistics --- 00:24:16.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.991 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.991 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1357786 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1357786 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1357786 ']' 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:16.991 [2024-11-20 16:18:52.106009] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
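The setup that just completed splits the two E810 ports across network namespaces so a single host can act as both NVMe/TCP target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 on cvl_0_1 in the root namespace), then proves reachability with one ping in each direction. Condensed from the trace above (interface and namespace names as they appear in this log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator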
00:24:16.991 [2024-11-20 16:18:52.106076] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.991 [2024-11-20 16:18:52.206432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.991 [2024-11-20 16:18:52.257528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.991 [2024-11-20 16:18:52.257581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.991 [2024-11-20 16:18:52.257590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.991 [2024-11-20 16:18:52.257597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.991 [2024-11-20 16:18:52.257603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.991 [2024-11-20 16:18:52.258345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.991 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1358126 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c1f05bb0-8524-4055-ac18-025348e9aff8 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3ac5aa49-9b62-46f0-bcc6-f90d70ebc9ee 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f6994f14-962c-4e6d-97d7-8765616f3843 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.252 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:17.252 null0 00:24:17.252 null1 00:24:17.252 null2 00:24:17.252 [2024-11-20 16:18:53.017890] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:24:17.252 [2024-11-20 16:18:53.017961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358126 ] 00:24:17.252 [2024-11-20 16:18:53.020803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.252 [2024-11-20 16:18:53.045092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.252 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.252 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1358126 /var/tmp/tgt2.sock 00:24:17.252 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1358126 ']' 00:24:17.253 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:17.253 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.253 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:17.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
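The three uuidgen values become the fixed identities of the namespaces that the second target (driven over /var/tmp/tgt2.sock) will expose. The actual RPCs are hidden inside rpc_cmd, so the following is only a hypothetical reconstruction of such a setup with SPDK's rpc.py; the subsystem name, the null-bdev sizing, and the choice of --uuid are assumptions, not what nsid.sh necessarily does:

    # Hypothetical sketch for one of the three namespaces (null0 / ns1uuid)
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t tcp
    $rpc bdev_null_create null0 100 4096        # 100 MB null bdev, 4096-byte blocks
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode0 null0 \
        --uuid c1f05bb0-8524-4055-ac18-025348e9aff8
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode0 -t tcp -a 10.0.0.1 -s 4421

Whatever the exact RPCs, the NGUID assertions that follow rely on one identity rule, visible in the trace's tr -d - plus uppercasing: the NGUID a namespace reports is its UUID with the hyphens stripped, e.g. c1f05bb0-8524-4055-ac18-025348e9aff8 -> C1F05BB085244055AC18025348E9AFF8.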
00:24:17.253 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.253 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:17.253 [2024-11-20 16:18:53.110010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.253 [2024-11-20 16:18:53.162877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.514 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.514 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:17.514 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:18.086 [2024-11-20 16:18:53.719509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.087 [2024-11-20 16:18:53.735692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:18.087 nvme0n1 nvme0n2 00:24:18.087 nvme1n1 00:24:18.087 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:18.087 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:18.087 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:19.474 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:20.419 16:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c1f05bb0-8524-4055-ac18-025348e9aff8 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c1f05bb085244055ac18025348e9aff8 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C1F05BB085244055AC18025348E9AFF8 00:24:20.419 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C1F05BB085244055AC18025348E9AFF8 == \C\1\F\0\5\B\B\0\8\5\2\4\4\0\5\5\A\C\1\8\0\2\5\3\4\8\E\9\A\F\F\8 ]] 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3ac5aa49-9b62-46f0-bcc6-f90d70ebc9ee 00:24:20.420 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3ac5aa499b6246f0bcc6f90d70ebc9ee 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3AC5AA499B6246F0BCC6F90D70EBC9EE 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3AC5AA499B6246F0BCC6F90D70EBC9EE == \3\A\C\5\A\A\4\9\9\B\6\2\4\6\F\0\B\C\C\6\F\9\0\D\7\0\E\B\C\9\E\E ]] 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:20.680 16:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f6994f14-962c-4e6d-97d7-8765616f3843 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f6994f14962c4e6d97d78765616f3843 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F6994F14962C4E6D97D78765616F3843 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F6994F14962C4E6D97D78765616F3843 == \F\6\9\9\4\F\1\4\9\6\2\C\4\E\6\D\9\7\D\7\8\7\6\5\6\1\6\F\3\8\4\3 ]] 00:24:20.680 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1358126 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1358126 ']' 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1358126 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1358126 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1358126' 00:24:20.942 killing process with pid 1358126 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1358126 00:24:20.942 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1358126 00:24:21.203 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:21.203 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.203 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:21.203 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.203 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:24:21.203 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.203 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.203 rmmod nvme_tcp 00:24:21.203 rmmod nvme_fabrics 00:24:21.203 rmmod nvme_keyring 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1357786 ']' 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1357786 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1357786 ']' 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1357786 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1357786 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1357786' 00:24:21.203 killing process with pid 1357786 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1357786 00:24:21.203 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1357786 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.464 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.385 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.385 00:24:23.385 real 0m14.974s 00:24:23.385 user 
0m11.424s 00:24:23.385 sys 0m6.921s 00:24:23.385 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.385 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:23.385 ************************************ 00:24:23.385 END TEST nvmf_nsid 00:24:23.385 ************************************ 00:24:23.385 16:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:23.385 00:24:23.385 real 13m5.438s 00:24:23.385 user 27m20.480s 00:24:23.385 sys 3m57.126s 00:24:23.385 16:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.385 16:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:23.385 ************************************ 00:24:23.385 END TEST nvmf_target_extra 00:24:23.385 ************************************ 00:24:23.647 16:18:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:23.647 16:18:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.647 16:18:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.647 16:18:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:23.647 ************************************ 00:24:23.647 START TEST nvmf_host 00:24:23.647 ************************************ 00:24:23.647 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:23.647 * Looking for test storage... 00:24:23.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.648 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:23.909 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.909 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:23.909 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:23.909 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
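The cmp_versions trace just above is the lcov gate: scripts/common.sh splits both version strings on '.', '-' and ':' via IFS, then walks the fields numerically until one side wins. A minimal standalone sketch of that walk (a reconstruction for illustration, not the exact SPDK helper, which additionally validates each field through its decimal function):

    cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:                      # split on the separators seen in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v d1 d2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}          # missing fields compare as 0
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                  # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, so the lcov 1.x --rc options get used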
00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.910 ************************************ 00:24:23.910 START TEST nvmf_multicontroller 00:24:23.910 ************************************ 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:23.910 * Looking for test storage... 
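The "[: : integer expression expected" complaint above deserves a note: common.sh line 33 runs an integer test ('[' '' -eq 1 ']') on a flag that is empty in this configuration, and test(1) rejects an empty operand for -eq. The run shrugs it off because the test simply evaluates false, but the usual defensive pattern is to default the expansion; a sketch follows (the real flag name behind line 33 is not visible in this trace, so "flag" is a stand-in):

    flag=""                                # stands in for the unset config flag
    [ "$flag" -eq 1 ] 2>/dev/null          # what line 33 effectively does: errors on ''
    if [ "${flag:-0}" -eq 1 ]; then        # defaulting empty/unset to 0 avoids the error
        echo "feature enabled"
    fi
    if (( ${flag:-0} == 1 )); then         # arithmetic form, same guard
        echo "feature enabled"
    fi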
00:24:23.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.910 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:24.173 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:24.173 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.173 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.174 --rc genhtml_branch_coverage=1 00:24:24.174 --rc genhtml_function_coverage=1 00:24:24.174 --rc genhtml_legend=1 00:24:24.174 --rc geninfo_all_blocks=1 00:24:24.174 --rc geninfo_unexecuted_blocks=1 00:24:24.174 00:24:24.174 ' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.174 --rc genhtml_branch_coverage=1 00:24:24.174 --rc genhtml_function_coverage=1 00:24:24.174 --rc genhtml_legend=1 00:24:24.174 --rc geninfo_all_blocks=1 00:24:24.174 --rc geninfo_unexecuted_blocks=1 00:24:24.174 00:24:24.174 ' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.174 --rc genhtml_branch_coverage=1 00:24:24.174 --rc genhtml_function_coverage=1 00:24:24.174 --rc genhtml_legend=1 00:24:24.174 --rc geninfo_all_blocks=1 00:24:24.174 --rc geninfo_unexecuted_blocks=1 00:24:24.174 00:24:24.174 ' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.174 --rc genhtml_branch_coverage=1 00:24:24.174 --rc genhtml_function_coverage=1 00:24:24.174 --rc genhtml_legend=1 00:24:24.174 --rc geninfo_all_blocks=1 00:24:24.174 --rc geninfo_unexecuted_blocks=1 00:24:24.174 00:24:24.174 ' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:24.174 16:18:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:24.174 16:18:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:24.174 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.175 16:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.320 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.321 
16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:32.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:32.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.321 16:19:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:32.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:32.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
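gather_supported_nvmf_pci_devs above matched both functions of an Intel E810 NIC (vendor 0x8086, device 0x159b, driver ice) against its device-ID whitelist, then resolved each PCI function to its kernel net device by globbing sysfs, which is why the trace prints cvl_0_0 and cvl_0_1. A condensed sketch of that resolution step (PCI addresses taken from the trace; the operstate read mirrors the [[ up == up ]] check):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] || continue            # no netdev bound to this function
            name=${dev##*/}                      # cvl_0_0 / cvl_0_1 in this run
            state=$(<"$dev/operstate")           # only 'up' devices are kept
            echo "Found net devices under $pci: $name ($state)"
        done
    done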
00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.840 ms 00:24:32.321 00:24:32.321 --- 10.0.0.2 ping statistics --- 00:24:32.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.321 rtt min/avg/max/mdev = 0.840/0.840/0.840/0.000 ms 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:24:32.321 00:24:32.321 --- 10.0.0.1 ping statistics --- 00:24:32.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.321 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.321 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1363215 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1363215 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1363215 ']' 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.322 16:19:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 [2024-11-20 16:19:07.486173] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
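Condensing the nvmf_tcp_init trace above: the target port cvl_0_0 is moved into a private network namespace while the initiator port cvl_0_1 stays in the root namespace, so the 10.0.0.1 <-> 10.0.0.2 traffic between them crosses the physical link rather than loopback, and the firewall rule is tagged SPDK_NVMF so the iptr cleanup seen at the end of the previous test can strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore. The wiring, as executed (the on-rule comment is shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                   # tagged for later cleanup
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator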
00:24:32.322 [2024-11-20 16:19:07.486242] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.322 [2024-11-20 16:19:07.588247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.322 [2024-11-20 16:19:07.639929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.322 [2024-11-20 16:19:07.639983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.322 [2024-11-20 16:19:07.639992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.322 [2024-11-20 16:19:07.639999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.322 [2024-11-20 16:19:07.640009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.322 [2024-11-20 16:19:07.641863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.322 [2024-11-20 16:19:07.642028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.322 [2024-11-20 16:19:07.642028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 [2024-11-20 16:19:08.365020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 Malloc0 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 [2024-11-20 16:19:08.443028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 [2024-11-20 16:19:08.454941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 Malloc1 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1363276 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1363276 /var/tmp/bdevperf.sock 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1363276 ']' 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
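The rpc_cmd calls above amount to the standard target bring-up, replayed here in one place for readability (rpc_cmd wraps scripts/rpc.py pointed at the nvmf_tgt inside the namespace; the bdevperf line then starts the initiator-side harness, where -z makes it wait for RPC-driven configuration on its private socket before issuing I/O):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...repeated with Malloc1 / cnode2, then:
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f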
00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.845 16:19:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.912 NVMe0n1 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.912 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.912 1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.913 request: 00:24:33.913 { 00:24:33.913 "name": "NVMe0", 00:24:33.913 "trtype": "tcp", 00:24:33.913 "traddr": "10.0.0.2", 00:24:33.913 "adrfam": "ipv4", 00:24:33.913 "trsvcid": "4420", 00:24:33.913 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:33.913 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:33.913 "hostaddr": "10.0.0.1", 00:24:33.913 "prchk_reftag": false, 00:24:33.913 "prchk_guard": false, 00:24:33.913 "hdgst": false, 00:24:33.913 "ddgst": false, 00:24:33.913 "allow_unrecognized_csi": false, 00:24:33.913 "method": "bdev_nvme_attach_controller", 00:24:33.913 "req_id": 1 00:24:33.913 } 00:24:33.913 Got JSON-RPC error response 00:24:33.913 response: 00:24:33.913 { 00:24:33.913 "code": -114, 00:24:33.913 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:33.913 } 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.913 request: 00:24:33.913 { 00:24:33.913 "name": "NVMe0", 00:24:33.913 "trtype": "tcp", 00:24:33.913 "traddr": "10.0.0.2", 00:24:33.913 "adrfam": "ipv4", 00:24:33.913 "trsvcid": "4420", 00:24:33.913 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:33.913 "hostaddr": "10.0.0.1", 00:24:33.913 "prchk_reftag": false, 00:24:33.913 "prchk_guard": false, 00:24:33.913 "hdgst": false, 00:24:33.913 "ddgst": false, 00:24:33.913 "allow_unrecognized_csi": false, 00:24:33.913 "method": "bdev_nvme_attach_controller", 00:24:33.913 "req_id": 1 00:24:33.913 } 00:24:33.913 Got JSON-RPC error response 00:24:33.913 response: 00:24:33.913 { 00:24:33.913 "code": -114, 00:24:33.913 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:33.913 } 00:24:33.913 16:19:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.913 request: 00:24:33.913 { 00:24:33.913 "name": "NVMe0", 00:24:33.913 "trtype": "tcp", 00:24:33.913 "traddr": "10.0.0.2", 00:24:33.913 "adrfam": "ipv4", 00:24:33.913 "trsvcid": "4420", 00:24:33.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.913 "hostaddr": "10.0.0.1", 00:24:33.913 "prchk_reftag": false, 00:24:33.913 "prchk_guard": false, 00:24:33.913 "hdgst": false, 00:24:33.913 "ddgst": false, 00:24:33.913 "multipath": "disable", 00:24:33.913 "allow_unrecognized_csi": false, 00:24:33.913 "method": "bdev_nvme_attach_controller", 00:24:33.913 "req_id": 1 00:24:33.913 } 00:24:33.913 Got JSON-RPC error response 00:24:33.913 response: 00:24:33.913 { 00:24:33.913 "code": -114, 00:24:33.913 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:33.913 } 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:33.913 16:19:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.913 request: 00:24:33.913 { 00:24:33.913 "name": "NVMe0", 00:24:33.913 "trtype": "tcp", 00:24:33.913 "traddr": "10.0.0.2", 00:24:33.913 "adrfam": "ipv4", 00:24:33.913 "trsvcid": "4420", 00:24:33.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.913 "hostaddr": "10.0.0.1", 00:24:33.913 "prchk_reftag": false, 00:24:33.913 "prchk_guard": false, 00:24:33.913 "hdgst": false, 00:24:33.913 "ddgst": false, 00:24:33.913 "multipath": "failover", 00:24:33.913 "allow_unrecognized_csi": false, 00:24:33.913 "method": "bdev_nvme_attach_controller", 00:24:33.913 "req_id": 1 00:24:33.913 } 00:24:33.913 Got JSON-RPC error response 00:24:33.913 response: 00:24:33.913 { 00:24:33.913 "code": -114, 00:24:33.913 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:33.913 } 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:33.913 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.914 NVMe0n1 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
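The four NOT rpc_cmd cases above re-attach the existing controller name NVMe0 with a single conflicting parameter (hostnqn, subsystem NQN, multipath disable, multipath failover) and assert JSON-RPC error -114, while the attach to a second portal of the same subsystem on port 4421 at the end is accepted as a new path. A minimal standalone sketch of the same battery, assuming SPDK's scripts/rpc.py (which rpc_cmd in this trace drives) and the bdevperf RPC socket /var/tmp/bdevperf.sock shown above; the expect_dup helper is illustrative and not part of the harness:

#!/usr/bin/env bash
# Sketch only: replays the duplicate-attach checks from host/multicontroller.sh.
# Assumes an SPDK checkout (scripts/rpc.py), a bdevperf RPC socket at
# /var/tmp/bdevperf.sock, and a target at 10.0.0.2:4420/4421 exporting
# nqn.2016-06.io.spdk:cnode1, all taken from the trace above.
set -e
RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
ATTACH="bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1"

$RPC $ATTACH                        # first attach must succeed and expose NVMe0n1

expect_dup() {                      # illustrative helper: the call must fail (-114)
    if $RPC "$@" 2>/dev/null; then
        echo "FAIL: duplicate attach unexpectedly succeeded: $*" >&2
        exit 1
    fi
}
expect_dup $ATTACH -q nqn.2021-09-7.io.spdk:00001    # conflicting hostnqn
expect_dup ${ATTACH/cnode1/cnode2}                   # same name, different subsystem
expect_dup $ATTACH -x disable                        # same path, multipath disabled
expect_dup $ATTACH -x failover                       # failover still needs a new path
# A genuinely new path (second listener port) under the same name is allowed:
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1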
00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.914 00:24:33.914 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.220 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.220 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:34.220 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.220 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.220 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.220 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:34.220 16:19:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.162 { 00:24:35.162 "results": [ 00:24:35.162 { 00:24:35.162 "job": "NVMe0n1", 00:24:35.162 "core_mask": "0x1", 00:24:35.162 "workload": "write", 00:24:35.162 "status": "finished", 00:24:35.162 "queue_depth": 128, 00:24:35.162 "io_size": 4096, 00:24:35.162 "runtime": 1.006578, 00:24:35.162 "iops": 26288.077029301257, 00:24:35.162 "mibps": 102.68780089570804, 00:24:35.162 "io_failed": 0, 00:24:35.162 "io_timeout": 0, 00:24:35.162 "avg_latency_us": 4857.267112605974, 00:24:35.162 "min_latency_us": 2102.6133333333332, 00:24:35.162 "max_latency_us": 17148.586666666666 00:24:35.162 } 00:24:35.162 ], 00:24:35.162 "core_count": 1 00:24:35.162 } 00:24:35.162 16:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:35.162 16:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.162 16:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1363276 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1363276 ']' 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1363276 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1363276 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1363276' 00:24:35.162 killing process with pid 1363276 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1363276 00:24:35.162 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1363276 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:35.424 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:35.424 [2024-11-20 16:19:08.584401] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:24:35.424 [2024-11-20 16:19:08.584482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363276 ]
00:24:35.424 [2024-11-20 16:19:08.677799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:35.424 [2024-11-20 16:19:08.732156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:35.424 [2024-11-20 16:19:09.839277] bdev.c:4697:bdev_name_add: *ERROR*: Bdev name e09f3c28-c95f-4110-87ac-2778f4232112 already exists
00:24:35.424 [2024-11-20 16:19:09.839322] bdev.c:7898:bdev_register: *ERROR*: Unable to add uuid:e09f3c28-c95f-4110-87ac-2778f4232112 alias for bdev NVMe1n1
00:24:35.424 [2024-11-20 16:19:09.839332] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:24:35.424 Running I/O for 1 seconds...
00:24:35.424 26269.00 IOPS, 102.61 MiB/s
00:24:35.424 Latency(us)
00:24:35.424 [2024-11-20T15:19:11.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.424 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:24:35.424 NVMe0n1 : 1.01 26288.08 102.69 0.00 0.00 4857.27 2102.61 17148.59
00:24:35.424 [2024-11-20T15:19:11.360Z] ===================================================================================================================
00:24:35.424 [2024-11-20T15:19:11.360Z] Total : 26288.08 102.69 0.00 0.00 4857.27 2102.61 17148.59
00:24:35.424 Received shutdown signal, test time was about 1.000000 seconds
00:24:35.424
00:24:35.424 Latency(us)
00:24:35.424 [2024-11-20T15:19:11.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.424 [2024-11-20T15:19:11.360Z] ===================================================================================================================
00:24:35.424 [2024-11-20T15:19:11.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:35.424 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:35.424 rmmod nvme_tcp
00:24:35.424 rmmod nvme_fabrics
00:24:35.424 rmmod nvme_keyring
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
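The nvmftestfini sequence that starts above and continues below reduces to a handful of commands once the xtrace noise is stripped. A condensed sketch with values from this run (target pid 1363215, interfaces cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk); treating _remove_spdk_ns as a plain ip netns delete is an assumption, not the harness's exact helper:

# Sketch of the cleanup path: unload initiator modules, stop the target,
# strip only the SPDK-tagged iptables rules, then drop the test namespace.
sync
modprobe -v -r nvme-tcp         # cascades to nvme_fabrics/nvme_keyring (the rmmod lines above)
modprobe -v -r nvme-fabrics     # no-op here, already removed by the cascade
kill 1363215 && wait 1363215    # killprocess: wait works because nvmf_tgt is a child of the test shell
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep all non-SPDK rules intact
ip netns delete cvl_0_0_ns_spdk # assumed equivalent of _remove_spdk_ns in this run
ip -4 addr flush cvl_0_1        # matches the final flush in the trace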
00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1363215 ']' 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1363215 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1363215 ']' 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1363215 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.424 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1363215 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1363215' 00:24:35.685 killing process with pid 1363215 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1363215 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1363215 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.685 16:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.234 16:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.234 00:24:38.234 real 0m13.921s 00:24:38.234 user 0m16.827s 00:24:38.234 sys 0m6.494s 00:24:38.234 16:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.234 16:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.234 ************************************ 00:24:38.234 END TEST nvmf_multicontroller 00:24:38.234 ************************************ 00:24:38.234 16:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:38.234 16:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.234 16:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.235 ************************************ 00:24:38.235 START TEST nvmf_aer 00:24:38.235 ************************************ 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:38.235 * Looking for test storage... 00:24:38.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.235 --rc genhtml_branch_coverage=1 00:24:38.235 --rc genhtml_function_coverage=1 00:24:38.235 --rc genhtml_legend=1 00:24:38.235 --rc geninfo_all_blocks=1 00:24:38.235 --rc geninfo_unexecuted_blocks=1 00:24:38.235 00:24:38.235 ' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.235 --rc genhtml_branch_coverage=1 00:24:38.235 --rc genhtml_function_coverage=1 00:24:38.235 --rc genhtml_legend=1 00:24:38.235 --rc geninfo_all_blocks=1 00:24:38.235 --rc geninfo_unexecuted_blocks=1 00:24:38.235 00:24:38.235 ' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.235 --rc genhtml_branch_coverage=1 00:24:38.235 --rc genhtml_function_coverage=1 00:24:38.235 --rc genhtml_legend=1 00:24:38.235 --rc geninfo_all_blocks=1 00:24:38.235 --rc geninfo_unexecuted_blocks=1 00:24:38.235 00:24:38.235 ' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.235 --rc genhtml_branch_coverage=1 00:24:38.235 --rc genhtml_function_coverage=1 00:24:38.235 --rc genhtml_legend=1 00:24:38.235 --rc geninfo_all_blocks=1 00:24:38.235 --rc geninfo_unexecuted_blocks=1 00:24:38.235 00:24:38.235 ' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.235 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:38.236 16:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:46.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:46.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.378 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:46.379 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.379 16:19:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:46.379 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.379 
16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:24:46.379 00:24:46.379 --- 10.0.0.2 ping statistics --- 00:24:46.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.379 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:24:46.379 00:24:46.379 --- 10.0.0.1 ping statistics --- 00:24:46.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.379 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1368063 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1368063 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1368063 ']' 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.379 16:19:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.379 [2024-11-20 16:19:21.489193] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:24:46.379 [2024-11-20 16:19:21.489258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.379 [2024-11-20 16:19:21.588829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.379 [2024-11-20 16:19:21.643146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.379 [2024-11-20 16:19:21.643212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.379 [2024-11-20 16:19:21.643221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.379 [2024-11-20 16:19:21.643228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.379 [2024-11-20 16:19:21.643234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.379 [2024-11-20 16:19:21.645233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.379 [2024-11-20 16:19:21.645429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.379 [2024-11-20 16:19:21.645565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.379 [2024-11-20 16:19:21.645565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.379 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.379 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:46.379 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.379 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.379 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.640 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.640 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.640 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.640 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.640 [2024-11-20 16:19:22.362042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.640 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.640 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.641 Malloc0 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.641 [2024-11-20 16:19:22.438179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.641 [ 00:24:46.641 { 00:24:46.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:46.641 "subtype": "Discovery", 00:24:46.641 "listen_addresses": [], 00:24:46.641 "allow_any_host": true, 00:24:46.641 "hosts": [] 00:24:46.641 }, 00:24:46.641 { 00:24:46.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.641 "subtype": "NVMe", 00:24:46.641 "listen_addresses": [ 00:24:46.641 { 00:24:46.641 "trtype": "TCP", 00:24:46.641 "adrfam": "IPv4", 00:24:46.641 "traddr": "10.0.0.2", 00:24:46.641 "trsvcid": "4420" 00:24:46.641 } 00:24:46.641 ], 00:24:46.641 "allow_any_host": true, 00:24:46.641 "hosts": [], 00:24:46.641 "serial_number": "SPDK00000000000001", 00:24:46.641 "model_number": "SPDK bdev Controller", 00:24:46.641 "max_namespaces": 2, 00:24:46.641 "min_cntlid": 1, 00:24:46.641 "max_cntlid": 65519, 00:24:46.641 "namespaces": [ 00:24:46.641 { 00:24:46.641 "nsid": 1, 00:24:46.641 "bdev_name": "Malloc0", 00:24:46.641 "name": "Malloc0", 00:24:46.641 "nguid": "04D10F12CD584861AA596996F67A7482", 00:24:46.641 "uuid": "04d10f12-cd58-4861-aa59-6996f67a7482" 00:24:46.641 } 00:24:46.641 ] 00:24:46.641 } 00:24:46.641 ] 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1368305 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:46.641 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.903 Malloc1 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.903 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.165 Asynchronous Event Request test 00:24:47.165 Attaching to 10.0.0.2 00:24:47.165 Attached to 10.0.0.2 00:24:47.165 Registering asynchronous event callbacks... 00:24:47.165 Starting namespace attribute notice tests for all controllers... 00:24:47.165 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:47.165 aer_cb - Changed Namespace 00:24:47.165 Cleaning up... 
00:24:47.165 [ 00:24:47.165 { 00:24:47.165 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:47.165 "subtype": "Discovery", 00:24:47.165 "listen_addresses": [], 00:24:47.165 "allow_any_host": true, 00:24:47.165 "hosts": [] 00:24:47.165 }, 00:24:47.165 { 00:24:47.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.165 "subtype": "NVMe", 00:24:47.165 "listen_addresses": [ 00:24:47.165 { 00:24:47.165 "trtype": "TCP", 00:24:47.165 "adrfam": "IPv4", 00:24:47.165 "traddr": "10.0.0.2", 00:24:47.165 "trsvcid": "4420" 00:24:47.165 } 00:24:47.165 ], 00:24:47.165 "allow_any_host": true, 00:24:47.165 "hosts": [], 00:24:47.165 "serial_number": "SPDK00000000000001", 00:24:47.165 "model_number": "SPDK bdev Controller", 00:24:47.165 "max_namespaces": 2, 00:24:47.165 "min_cntlid": 1, 00:24:47.165 "max_cntlid": 65519, 00:24:47.165 "namespaces": [ 00:24:47.165 { 00:24:47.165 "nsid": 1, 00:24:47.165 "bdev_name": "Malloc0", 00:24:47.165 "name": "Malloc0", 00:24:47.165 "nguid": "04D10F12CD584861AA596996F67A7482", 00:24:47.165 "uuid": "04d10f12-cd58-4861-aa59-6996f67a7482" 00:24:47.165 }, 00:24:47.165 { 00:24:47.165 "nsid": 2, 00:24:47.165 "bdev_name": "Malloc1", 00:24:47.165 "name": "Malloc1", 00:24:47.165 "nguid": "ED9D05912D784A9FA6BEB1AD84020E71", 00:24:47.165 "uuid": "ed9d0591-2d78-4a9f-a6be-b1ad84020e71" 00:24:47.165 } 00:24:47.165 ] 00:24:47.165 } 00:24:47.165 ] 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1368305 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.165 rmmod 
nvme_tcp 00:24:47.165 rmmod nvme_fabrics 00:24:47.165 rmmod nvme_keyring 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1368063 ']' 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1368063 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1368063 ']' 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1368063 00:24:47.165 16:19:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:47.165 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.165 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1368063 00:24:47.165 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.165 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.165 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1368063' 00:24:47.165 killing process with pid 1368063 00:24:47.165 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1368063 00:24:47.165 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1368063 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.427 16:19:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.972 00:24:49.972 real 0m11.648s 00:24:49.972 user 0m8.630s 00:24:49.972 sys 0m6.155s 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:49.972 ************************************ 00:24:49.972 END TEST nvmf_aer 00:24:49.972 ************************************ 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.972 ************************************ 00:24:49.972 START TEST nvmf_async_init 00:24:49.972 ************************************ 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:49.972 * Looking for test storage... 00:24:49.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.972 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.973 --rc genhtml_branch_coverage=1 00:24:49.973 --rc genhtml_function_coverage=1 00:24:49.973 --rc genhtml_legend=1 00:24:49.973 --rc geninfo_all_blocks=1 00:24:49.973 --rc geninfo_unexecuted_blocks=1 00:24:49.973 00:24:49.973 ' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.973 --rc genhtml_branch_coverage=1 00:24:49.973 --rc genhtml_function_coverage=1 00:24:49.973 --rc genhtml_legend=1 00:24:49.973 --rc geninfo_all_blocks=1 00:24:49.973 --rc geninfo_unexecuted_blocks=1 00:24:49.973 00:24:49.973 ' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.973 --rc genhtml_branch_coverage=1 00:24:49.973 --rc genhtml_function_coverage=1 00:24:49.973 --rc genhtml_legend=1 00:24:49.973 --rc geninfo_all_blocks=1 00:24:49.973 --rc geninfo_unexecuted_blocks=1 00:24:49.973 00:24:49.973 ' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.973 --rc genhtml_branch_coverage=1 00:24:49.973 --rc genhtml_function_coverage=1 00:24:49.973 --rc genhtml_legend=1 00:24:49.973 --rc geninfo_all_blocks=1 00:24:49.973 --rc geninfo_unexecuted_blocks=1 00:24:49.973 00:24:49.973 ' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.973 16:19:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:49.973 16:19:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9188b26a7d174c86b6a0689e23b58be0 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.973 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.974 16:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.113 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:58.114 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:58.114 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:58.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:58.114 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.114 16:19:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.114 16:19:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:24:58.114 00:24:58.114 --- 10.0.0.2 ping statistics --- 00:24:58.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.114 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:24:58.114 00:24:58.114 --- 10.0.0.1 ping statistics --- 00:24:58.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.114 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1372636 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1372636 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1372636 ']' 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.114 16:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.114 [2024-11-20 16:19:33.258245] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:24:58.114 [2024-11-20 16:19:33.258311] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.114 [2024-11-20 16:19:33.341292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.114 [2024-11-20 16:19:33.392434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.114 [2024-11-20 16:19:33.392485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.114 [2024-11-20 16:19:33.392493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.114 [2024-11-20 16:19:33.392500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.114 [2024-11-20 16:19:33.392506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.114 [2024-11-20 16:19:33.393256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 [2024-11-20 16:19:34.118747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 null0 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9188b26a7d174c86b6a0689e23b58be0 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 [2024-11-20 16:19:34.179117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.375 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.636 nvme0n1 00:24:58.636 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.636 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:58.636 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.636 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.636 [ 00:24:58.636 { 00:24:58.636 "name": "nvme0n1", 00:24:58.636 "aliases": [ 00:24:58.636 "9188b26a-7d17-4c86-b6a0-689e23b58be0" 00:24:58.636 ], 00:24:58.636 "product_name": "NVMe disk", 00:24:58.636 "block_size": 512, 00:24:58.636 "num_blocks": 2097152, 00:24:58.636 "uuid": "9188b26a-7d17-4c86-b6a0-689e23b58be0", 00:24:58.636 "numa_id": 0, 00:24:58.636 "assigned_rate_limits": { 00:24:58.636 "rw_ios_per_sec": 0, 00:24:58.636 "rw_mbytes_per_sec": 0, 00:24:58.636 "r_mbytes_per_sec": 0, 00:24:58.636 "w_mbytes_per_sec": 0 00:24:58.636 }, 00:24:58.637 "claimed": false, 00:24:58.637 "zoned": false, 00:24:58.637 "supported_io_types": { 00:24:58.637 "read": true, 00:24:58.637 "write": true, 00:24:58.637 "unmap": false, 00:24:58.637 "flush": true, 00:24:58.637 "reset": true, 00:24:58.637 "nvme_admin": true, 00:24:58.637 "nvme_io": true, 00:24:58.637 "nvme_io_md": false, 00:24:58.637 "write_zeroes": true, 00:24:58.637 "zcopy": false, 00:24:58.637 "get_zone_info": false, 00:24:58.637 "zone_management": false, 00:24:58.637 "zone_append": false, 00:24:58.637 "compare": true, 00:24:58.637 "compare_and_write": true, 00:24:58.637 "abort": true, 00:24:58.637 "seek_hole": false, 00:24:58.637 "seek_data": false, 00:24:58.637 "copy": true, 00:24:58.637 "nvme_iov_md": false 00:24:58.637 }, 00:24:58.637 
"memory_domains": [ 00:24:58.637 { 00:24:58.637 "dma_device_id": "system", 00:24:58.637 "dma_device_type": 1 00:24:58.637 } 00:24:58.637 ], 00:24:58.637 "driver_specific": { 00:24:58.637 "nvme": [ 00:24:58.637 { 00:24:58.637 "trid": { 00:24:58.637 "trtype": "TCP", 00:24:58.637 "adrfam": "IPv4", 00:24:58.637 "traddr": "10.0.0.2", 00:24:58.637 "trsvcid": "4420", 00:24:58.637 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:58.637 }, 00:24:58.637 "ctrlr_data": { 00:24:58.637 "cntlid": 1, 00:24:58.637 "vendor_id": "0x8086", 00:24:58.637 "model_number": "SPDK bdev Controller", 00:24:58.637 "serial_number": "00000000000000000000", 00:24:58.637 "firmware_revision": "25.01", 00:24:58.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.637 "oacs": { 00:24:58.637 "security": 0, 00:24:58.637 "format": 0, 00:24:58.637 "firmware": 0, 00:24:58.637 "ns_manage": 0 00:24:58.637 }, 00:24:58.637 "multi_ctrlr": true, 00:24:58.637 "ana_reporting": false 00:24:58.637 }, 00:24:58.637 "vs": { 00:24:58.637 "nvme_version": "1.3" 00:24:58.637 }, 00:24:58.637 "ns_data": { 00:24:58.637 "id": 1, 00:24:58.637 "can_share": true 00:24:58.637 } 00:24:58.637 } 00:24:58.637 ], 00:24:58.637 "mp_policy": "active_passive" 00:24:58.637 } 00:24:58.637 } 00:24:58.637 ] 00:24:58.637 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.637 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:58.637 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.637 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.637 [2024-11-20 16:19:34.455571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.637 [2024-11-20 16:19:34.455656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ace0 (9): Bad file descriptor 00:24:58.898 [2024-11-20 16:19:34.587272] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.898 [ 00:24:58.898 { 00:24:58.898 "name": "nvme0n1", 00:24:58.898 "aliases": [ 00:24:58.898 "9188b26a-7d17-4c86-b6a0-689e23b58be0" 00:24:58.898 ], 00:24:58.898 "product_name": "NVMe disk", 00:24:58.898 "block_size": 512, 00:24:58.898 "num_blocks": 2097152, 00:24:58.898 "uuid": "9188b26a-7d17-4c86-b6a0-689e23b58be0", 00:24:58.898 "numa_id": 0, 00:24:58.898 "assigned_rate_limits": { 00:24:58.898 "rw_ios_per_sec": 0, 00:24:58.898 "rw_mbytes_per_sec": 0, 00:24:58.898 "r_mbytes_per_sec": 0, 00:24:58.898 "w_mbytes_per_sec": 0 00:24:58.898 }, 00:24:58.898 "claimed": false, 00:24:58.898 "zoned": false, 00:24:58.898 "supported_io_types": { 00:24:58.898 "read": true, 00:24:58.898 "write": true, 00:24:58.898 "unmap": false, 00:24:58.898 "flush": true, 00:24:58.898 "reset": true, 00:24:58.898 "nvme_admin": true, 00:24:58.898 "nvme_io": true, 00:24:58.898 "nvme_io_md": false, 00:24:58.898 "write_zeroes": true, 00:24:58.898 "zcopy": false, 00:24:58.898 "get_zone_info": false, 00:24:58.898 "zone_management": false, 00:24:58.898 "zone_append": false, 00:24:58.898 "compare": true, 00:24:58.898 "compare_and_write": true, 00:24:58.898 "abort": true, 00:24:58.898 "seek_hole": false, 00:24:58.898 "seek_data": false, 00:24:58.898 "copy": true, 00:24:58.898 "nvme_iov_md": false 00:24:58.898 }, 00:24:58.898 "memory_domains": [ 00:24:58.898 { 00:24:58.898 "dma_device_id": "system", 00:24:58.898 "dma_device_type": 1 00:24:58.898 } 00:24:58.898 ], 00:24:58.898 "driver_specific": { 00:24:58.898 "nvme": [ 00:24:58.898 { 00:24:58.898 "trid": { 00:24:58.898 "trtype": "TCP", 00:24:58.898 "adrfam": "IPv4", 00:24:58.898 "traddr": "10.0.0.2", 00:24:58.898 "trsvcid": "4420", 00:24:58.898 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:58.898 }, 00:24:58.898 "ctrlr_data": { 00:24:58.898 "cntlid": 2, 00:24:58.898 "vendor_id": "0x8086", 00:24:58.898 "model_number": "SPDK bdev Controller", 00:24:58.898 "serial_number": "00000000000000000000", 00:24:58.898 "firmware_revision": "25.01", 00:24:58.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.898 "oacs": { 00:24:58.898 "security": 0, 00:24:58.898 "format": 0, 00:24:58.898 "firmware": 0, 00:24:58.898 "ns_manage": 0 00:24:58.898 }, 00:24:58.898 "multi_ctrlr": true, 00:24:58.898 "ana_reporting": false 00:24:58.898 }, 00:24:58.898 "vs": { 00:24:58.898 "nvme_version": "1.3" 00:24:58.898 }, 00:24:58.898 "ns_data": { 00:24:58.898 "id": 1, 00:24:58.898 "can_share": true 00:24:58.898 } 00:24:58.898 } 00:24:58.898 ], 00:24:58.898 "mp_policy": "active_passive" 00:24:58.898 } 00:24:58.898 } 00:24:58.898 ] 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
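[Editor's note] With the detach at host/async_init.sh@50 the plain-TCP phase is complete. Condensed out of the xtrace above, the RPC sequence it exercised was the following (same flags as in the log; rpc.py invocation in place of the test's rpc_cmd wrapper is an assumption). Note the -g value is the uuidgen output from async_init.sh@20 with dashes stripped, and it round-trips into the bdev's uuid and alias ("9188b26a-7d17-4c86-b6a0-689e23b58be0") in the dumps:

    # Plain-TCP attach lifecycle, host/async_init.sh@26-@50 condensed.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512      # 1024 MiB, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 9188b26a7d174c86b6a0689e23b58be0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0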
00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kceOBQKtMV 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kceOBQKtMV 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.kceOBQKtMV 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.898 [2024-11-20 16:19:34.676252] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:58.898 [2024-11-20 16:19:34.676415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.898 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.899 [2024-11-20 16:19:34.700326] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:58.899 nvme0n1 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.899 [ 00:24:58.899 { 00:24:58.899 "name": "nvme0n1", 00:24:58.899 "aliases": [ 00:24:58.899 "9188b26a-7d17-4c86-b6a0-689e23b58be0" 00:24:58.899 ], 00:24:58.899 "product_name": "NVMe disk", 00:24:58.899 "block_size": 512, 00:24:58.899 "num_blocks": 2097152, 00:24:58.899 "uuid": "9188b26a-7d17-4c86-b6a0-689e23b58be0", 00:24:58.899 "numa_id": 0, 00:24:58.899 "assigned_rate_limits": { 00:24:58.899 "rw_ios_per_sec": 0, 00:24:58.899 "rw_mbytes_per_sec": 0, 00:24:58.899 "r_mbytes_per_sec": 0, 00:24:58.899 "w_mbytes_per_sec": 0 00:24:58.899 }, 00:24:58.899 "claimed": false, 00:24:58.899 "zoned": false, 00:24:58.899 "supported_io_types": { 00:24:58.899 "read": true, 00:24:58.899 "write": true, 00:24:58.899 "unmap": false, 00:24:58.899 "flush": true, 00:24:58.899 "reset": true, 00:24:58.899 "nvme_admin": true, 00:24:58.899 "nvme_io": true, 00:24:58.899 "nvme_io_md": false, 00:24:58.899 "write_zeroes": true, 00:24:58.899 "zcopy": false, 00:24:58.899 "get_zone_info": false, 00:24:58.899 "zone_management": false, 00:24:58.899 "zone_append": false, 00:24:58.899 "compare": true, 00:24:58.899 "compare_and_write": true, 00:24:58.899 "abort": true, 00:24:58.899 "seek_hole": false, 00:24:58.899 "seek_data": false, 00:24:58.899 "copy": true, 00:24:58.899 "nvme_iov_md": false 00:24:58.899 }, 00:24:58.899 "memory_domains": [ 00:24:58.899 { 00:24:58.899 "dma_device_id": "system", 00:24:58.899 "dma_device_type": 1 00:24:58.899 } 00:24:58.899 ], 00:24:58.899 "driver_specific": { 00:24:58.899 "nvme": [ 00:24:58.899 { 00:24:58.899 "trid": { 00:24:58.899 "trtype": "TCP", 00:24:58.899 "adrfam": "IPv4", 00:24:58.899 "traddr": "10.0.0.2", 00:24:58.899 "trsvcid": "4421", 00:24:58.899 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:58.899 }, 00:24:58.899 "ctrlr_data": { 00:24:58.899 "cntlid": 3, 00:24:58.899 "vendor_id": "0x8086", 00:24:58.899 "model_number": "SPDK bdev Controller", 00:24:58.899 "serial_number": "00000000000000000000", 00:24:58.899 "firmware_revision": "25.01", 00:24:58.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.899 "oacs": { 00:24:58.899 "security": 0, 00:24:58.899 "format": 0, 00:24:58.899 "firmware": 0, 00:24:58.899 "ns_manage": 0 00:24:58.899 }, 00:24:58.899 "multi_ctrlr": true, 00:24:58.899 "ana_reporting": false 00:24:58.899 }, 00:24:58.899 "vs": { 00:24:58.899 "nvme_version": "1.3" 00:24:58.899 }, 00:24:58.899 "ns_data": { 00:24:58.899 "id": 1, 00:24:58.899 "can_share": true 00:24:58.899 } 00:24:58.899 } 00:24:58.899 ], 00:24:58.899 "mp_policy": "active_passive" 00:24:58.899 } 00:24:58.899 } 00:24:58.899 ] 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.kceOBQKtMV 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
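[Editor's note] The phase that just finished (host/async_init.sh@53-@76) repeats the attach against a TLS-secured listener on port 4421, gated on a pre-shared key: the listener is created with --secure-channel, anonymous hosts are disabled, and host1 is admitted with --psk. Reconstructed from the trace; the temp-file name is whatever mktemp returned in this run, and the key string is the interchange-format PSK taken verbatim from the log:

    # TLS/PSK variant of the attach, host/async_init.sh@53-@66 condensed.
    # The key is an NVMe TLS PSK interchange string; keep it mode 0600.
    KEY_PATH=$(mktemp)   # /tmp/tmp.kceOBQKtMV in the run above
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"

    ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listener and the attach path log "TLS support is considered experimental" in this SPDK revision, as seen in the trace above; the attach succeeding (nvme0n1 appearing with cntlid 3 on trsvcid 4421) is the pass condition for this phase.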
00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.899 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.899 rmmod nvme_tcp 00:24:59.160 rmmod nvme_fabrics 00:24:59.160 rmmod nvme_keyring 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1372636 ']' 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1372636 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1372636 ']' 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1372636 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1372636 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1372636' 00:24:59.160 killing process with pid 1372636 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1372636 00:24:59.160 16:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1372636 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.421 16:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.331 16:19:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:01.332 00:25:01.332 real 0m11.787s 00:25:01.332 user 0m4.294s 00:25:01.332 sys 0m6.065s 00:25:01.332 16:19:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.332 16:19:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:01.332 ************************************ 00:25:01.332 END TEST nvmf_async_init 00:25:01.332 ************************************ 00:25:01.332 16:19:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:01.332 16:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.332 16:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.332 16:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.595 ************************************ 00:25:01.595 START TEST dma 00:25:01.595 ************************************ 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:01.595 * Looking for test storage... 00:25:01.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.595 --rc genhtml_branch_coverage=1 00:25:01.595 --rc genhtml_function_coverage=1 00:25:01.595 --rc genhtml_legend=1 00:25:01.595 --rc geninfo_all_blocks=1 00:25:01.595 --rc geninfo_unexecuted_blocks=1 00:25:01.595 00:25:01.595 ' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.595 --rc genhtml_branch_coverage=1 00:25:01.595 --rc genhtml_function_coverage=1 00:25:01.595 --rc genhtml_legend=1 00:25:01.595 --rc geninfo_all_blocks=1 00:25:01.595 --rc geninfo_unexecuted_blocks=1 00:25:01.595 00:25:01.595 ' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.595 --rc genhtml_branch_coverage=1 00:25:01.595 --rc genhtml_function_coverage=1 00:25:01.595 --rc genhtml_legend=1 00:25:01.595 --rc geninfo_all_blocks=1 00:25:01.595 --rc geninfo_unexecuted_blocks=1 00:25:01.595 00:25:01.595 ' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:01.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.595 --rc genhtml_branch_coverage=1 00:25:01.595 --rc genhtml_function_coverage=1 00:25:01.595 --rc genhtml_legend=1 00:25:01.595 --rc geninfo_all_blocks=1 00:25:01.595 --rc geninfo_unexecuted_blocks=1 00:25:01.595 00:25:01.595 ' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.595 
16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:01.595 00:25:01.595 real 0m0.240s 00:25:01.595 user 0m0.134s 00:25:01.595 sys 0m0.122s 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.595 16:19:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:01.596 ************************************ 00:25:01.596 END TEST dma 00:25:01.596 ************************************ 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.857 ************************************ 00:25:01.857 START TEST nvmf_identify 00:25:01.857 
************************************ 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:01.857 * Looking for test storage... 00:25:01.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.857 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:02.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.119 --rc genhtml_branch_coverage=1 00:25:02.119 --rc genhtml_function_coverage=1 00:25:02.119 --rc genhtml_legend=1 00:25:02.119 --rc geninfo_all_blocks=1 00:25:02.119 --rc geninfo_unexecuted_blocks=1 00:25:02.119 00:25:02.119 ' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:02.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.119 --rc genhtml_branch_coverage=1 00:25:02.119 --rc genhtml_function_coverage=1 00:25:02.119 --rc genhtml_legend=1 00:25:02.119 --rc geninfo_all_blocks=1 00:25:02.119 --rc geninfo_unexecuted_blocks=1 00:25:02.119 00:25:02.119 ' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:02.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.119 --rc genhtml_branch_coverage=1 00:25:02.119 --rc genhtml_function_coverage=1 00:25:02.119 --rc genhtml_legend=1 00:25:02.119 --rc geninfo_all_blocks=1 00:25:02.119 --rc geninfo_unexecuted_blocks=1 00:25:02.119 00:25:02.119 ' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:02.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.119 --rc genhtml_branch_coverage=1 00:25:02.119 --rc genhtml_function_coverage=1 00:25:02.119 --rc genhtml_legend=1 00:25:02.119 --rc geninfo_all_blocks=1 00:25:02.119 --rc geninfo_unexecuted_blocks=1 00:25:02.119 00:25:02.119 ' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.119 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.120 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.120 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.120 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.120 16:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.260 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:10.261 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:10.261 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
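gather_supported_nvmf_pci_devs above matches PCI functions against a whitelist of Intel (E810/X722) and Mellanox device IDs, keeps the E810 pair it finds (0x8086:0x159b, bound to the ice driver), and resolves each function to its kernel netdev through sysfs. The discovery reduces to roughly this (the lspci filter form is an assumption; the sysfs path is what the helper actually reads):

    # every Intel E810 function, mapped to the net device the kernel exposes for it
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
    done
    # this run: 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1
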
00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:10.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:10.261 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:25:10.261 00:25:10.261 --- 10.0.0.2 ping statistics --- 00:25:10.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.261 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:25:10.261 00:25:10.261 --- 10.0.0.1 ping statistics --- 00:25:10.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.261 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.261 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1377267 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1377267 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1377267 ']' 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.262 16:19:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.262 [2024-11-20 16:19:45.435991] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
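The two ports are then split across network namespaces so one host can play both roles: cvl_0_0 moves into cvl_0_0_ns_spdk as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the cross-namespace pings above verify reachability in both directions. identify.sh then boots the target inside that namespace and waits for its RPC socket. Replayed as plain commands (addresses, interfaces, and flags exactly as traced; the polling loop stands in for waitforlisten and is a sketch, not the helper's source):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # 0.612 ms in the trace above

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                             # 1377267 in this run
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                          # waitforlisten equivalent
    done
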
00:25:10.262 [2024-11-20 16:19:45.436056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.262 [2024-11-20 16:19:45.539350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:10.262 [2024-11-20 16:19:45.593987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.262 [2024-11-20 16:19:45.594043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.262 [2024-11-20 16:19:45.594052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.262 [2024-11-20 16:19:45.594060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.262 [2024-11-20 16:19:45.594066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.262 [2024-11-20 16:19:45.596154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.262 [2024-11-20 16:19:45.596318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.262 [2024-11-20 16:19:45.596519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.262 [2024-11-20 16:19:45.596519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 [2024-11-20 16:19:46.272480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 Malloc0 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 [2024-11-20 16:19:46.394497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.523 [ 00:25:10.523 { 00:25:10.523 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:10.523 "subtype": "Discovery", 00:25:10.523 "listen_addresses": [ 00:25:10.523 { 00:25:10.523 "trtype": "TCP", 00:25:10.523 "adrfam": "IPv4", 00:25:10.523 "traddr": "10.0.0.2", 00:25:10.523 "trsvcid": "4420" 00:25:10.523 } 00:25:10.523 ], 00:25:10.523 "allow_any_host": true, 00:25:10.523 "hosts": [] 00:25:10.523 }, 00:25:10.523 { 00:25:10.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.523 "subtype": "NVMe", 00:25:10.523 "listen_addresses": [ 00:25:10.523 { 00:25:10.523 "trtype": "TCP", 00:25:10.523 "adrfam": "IPv4", 00:25:10.523 "traddr": "10.0.0.2", 00:25:10.523 "trsvcid": "4420" 00:25:10.523 } 00:25:10.523 ], 00:25:10.523 "allow_any_host": true, 00:25:10.523 "hosts": [], 00:25:10.523 "serial_number": "SPDK00000000000001", 00:25:10.523 "model_number": "SPDK bdev Controller", 00:25:10.523 "max_namespaces": 32, 00:25:10.523 "min_cntlid": 1, 00:25:10.523 "max_cntlid": 65519, 00:25:10.523 "namespaces": [ 00:25:10.523 { 00:25:10.523 "nsid": 1, 00:25:10.523 "bdev_name": "Malloc0", 00:25:10.523 "name": "Malloc0", 00:25:10.523 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:10.523 "eui64": "ABCDEF0123456789", 00:25:10.523 "uuid": "b1c24a89-3f29-402a-be4f-0978bcd8ac73" 00:25:10.523 } 00:25:10.523 ] 00:25:10.523 } 00:25:10.523 ] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.523 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:10.788 [2024-11-20 16:19:46.459507] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:25:10.788 [2024-11-20 16:19:46.459556] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377400 ] 00:25:10.788 [2024-11-20 16:19:46.516824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:10.788 [2024-11-20 16:19:46.516914] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:10.788 [2024-11-20 16:19:46.516920] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:10.788 [2024-11-20 16:19:46.516941] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:10.788 [2024-11-20 16:19:46.516955] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:10.788 [2024-11-20 16:19:46.520633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:10.788 [2024-11-20 16:19:46.520686] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5b2690 0 00:25:10.788 [2024-11-20 16:19:46.528174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:10.788 [2024-11-20 16:19:46.528194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:10.788 [2024-11-20 16:19:46.528200] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:10.788 [2024-11-20 16:19:46.528203] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:10.788 [2024-11-20 16:19:46.528253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.528260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.528264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.788 [2024-11-20 16:19:46.528283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:10.788 [2024-11-20 16:19:46.528310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.788 [2024-11-20 16:19:46.536171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.788 [2024-11-20 16:19:46.536181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.788 [2024-11-20 16:19:46.536185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.788 [2024-11-20 16:19:46.536202] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:10.788 [2024-11-20 16:19:46.536211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:10.788 [2024-11-20 16:19:46.536217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:10.788 [2024-11-20 16:19:46.536235] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.788 [2024-11-20 16:19:46.536252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.788 [2024-11-20 16:19:46.536269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.788 [2024-11-20 16:19:46.536505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.788 [2024-11-20 16:19:46.536512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.788 [2024-11-20 16:19:46.536516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.788 [2024-11-20 16:19:46.536526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:10.788 [2024-11-20 16:19:46.536534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:10.788 [2024-11-20 16:19:46.536542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.788 [2024-11-20 16:19:46.536556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.788 [2024-11-20 16:19:46.536567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.788 [2024-11-20 16:19:46.536764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.788 [2024-11-20 16:19:46.536775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.788 [2024-11-20 16:19:46.536779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.788 [2024-11-20 16:19:46.536789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:10.788 [2024-11-20 16:19:46.536798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:10.788 [2024-11-20 16:19:46.536804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.536812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.788 [2024-11-20 16:19:46.536819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.788 [2024-11-20 16:19:46.536829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 
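The DEBUG lines running through here are the admin-queue bring-up against the discovery subsystem provisioned just above: ICReq/ICResp on the TCP socket, FABRIC CONNECT for qid 0, PROPERTY GET of VS and CAP, a CC read showing EN=0 with CSTS.RDY=0 (controller already disabled), PROPERTY SET of CC.EN=1, a poll until CSTS.RDY=1, then IDENTIFY controller (opcode 06h, CNS 1) returned as a 4096-byte C2H data PDU. The provisioning those rpc_cmd calls performed, condensed to one sequence (flags exactly as logged; the rpc wrapper function is shorthand introduced here):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192    # options as the test passes them
    rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB backing bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # then the probe whose admin-queue handshake is traced around this point:
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
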
00:25:10.788 [2024-11-20 16:19:46.537012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.788 [2024-11-20 16:19:46.537018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.788 [2024-11-20 16:19:46.537022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.537026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.788 [2024-11-20 16:19:46.537031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:10.788 [2024-11-20 16:19:46.537041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.537045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.537049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.788 [2024-11-20 16:19:46.537056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.788 [2024-11-20 16:19:46.537066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.788 [2024-11-20 16:19:46.537240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.788 [2024-11-20 16:19:46.537247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.788 [2024-11-20 16:19:46.537251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.537255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.788 [2024-11-20 16:19:46.537260] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:10.788 [2024-11-20 16:19:46.537265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:10.788 [2024-11-20 16:19:46.537273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:10.788 [2024-11-20 16:19:46.537386] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:10.788 [2024-11-20 16:19:46.537391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:10.788 [2024-11-20 16:19:46.537402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.537406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.537410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.788 [2024-11-20 16:19:46.537416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.788 [2024-11-20 16:19:46.537431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.788 [2024-11-20 16:19:46.537642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.788 [2024-11-20 16:19:46.537649] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.788 [2024-11-20 16:19:46.537652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.788 [2024-11-20 16:19:46.537656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.789 [2024-11-20 16:19:46.537661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:10.789 [2024-11-20 16:19:46.537671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.537675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.537678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.537685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.789 [2024-11-20 16:19:46.537695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.789 [2024-11-20 16:19:46.537903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.789 [2024-11-20 16:19:46.537910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.789 [2024-11-20 16:19:46.537913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.537917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.789 [2024-11-20 16:19:46.537922] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:10.789 [2024-11-20 16:19:46.537927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:10.789 [2024-11-20 16:19:46.537935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:10.789 [2024-11-20 16:19:46.537951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:10.789 [2024-11-20 16:19:46.537962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.537965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.537972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.789 [2024-11-20 16:19:46.537983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.789 [2024-11-20 16:19:46.538231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.789 [2024-11-20 16:19:46.538238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.789 [2024-11-20 16:19:46.538242] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.538247] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b2690): datao=0, datal=4096, cccid=0 00:25:10.789 [2024-11-20 16:19:46.538252] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x614100) on tqpair(0x5b2690): expected_datao=0, payload_size=4096 00:25:10.789 [2024-11-20 16:19:46.538257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.538270] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.538275] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.789 [2024-11-20 16:19:46.584184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.789 [2024-11-20 16:19:46.584193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.789 [2024-11-20 16:19:46.584209] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:10.789 [2024-11-20 16:19:46.584216] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:10.789 [2024-11-20 16:19:46.584223] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:10.789 [2024-11-20 16:19:46.584233] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:10.789 [2024-11-20 16:19:46.584238] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:10.789 [2024-11-20 16:19:46.584244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:10.789 [2024-11-20 16:19:46.584256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:10.789 [2024-11-20 16:19:46.584265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.584281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:10.789 [2024-11-20 16:19:46.584294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.789 [2024-11-20 16:19:46.584507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.789 [2024-11-20 16:19:46.584514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.789 [2024-11-20 16:19:46.584518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.789 [2024-11-20 16:19:46.584531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 
16:19:46.584548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.789 [2024-11-20 16:19:46.584558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.584576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.789 [2024-11-20 16:19:46.584583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.584596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.789 [2024-11-20 16:19:46.584604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.584617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.789 [2024-11-20 16:19:46.584627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:10.789 [2024-11-20 16:19:46.584636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:10.789 [2024-11-20 16:19:46.584645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.584651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.584659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.789 [2024-11-20 16:19:46.584675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614100, cid 0, qid 0 00:25:10.789 [2024-11-20 16:19:46.584684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614280, cid 1, qid 0 00:25:10.789 [2024-11-20 16:19:46.584689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614400, cid 2, qid 0 00:25:10.789 [2024-11-20 16:19:46.584694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.789 [2024-11-20 16:19:46.584699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614700, cid 4, qid 0 00:25:10.789 [2024-11-20 16:19:46.584964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.789 [2024-11-20 16:19:46.584972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.789 [2024-11-20 16:19:46.584975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.789 
[2024-11-20 16:19:46.584981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614700) on tqpair=0x5b2690 00:25:10.789 [2024-11-20 16:19:46.584992] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:10.789 [2024-11-20 16:19:46.584998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:10.789 [2024-11-20 16:19:46.585010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.789 [2024-11-20 16:19:46.585014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b2690) 00:25:10.789 [2024-11-20 16:19:46.585021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.790 [2024-11-20 16:19:46.585031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614700, cid 4, qid 0 00:25:10.790 [2024-11-20 16:19:46.585257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.790 [2024-11-20 16:19:46.585265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.790 [2024-11-20 16:19:46.585269] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585273] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b2690): datao=0, datal=4096, cccid=4 00:25:10.790 [2024-11-20 16:19:46.585278] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x614700) on tqpair(0x5b2690): expected_datao=0, payload_size=4096 00:25:10.790 [2024-11-20 16:19:46.585282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585289] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585293] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.790 [2024-11-20 16:19:46.585486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.790 [2024-11-20 16:19:46.585489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614700) on tqpair=0x5b2690 00:25:10.790 [2024-11-20 16:19:46.585507] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:10.790 [2024-11-20 16:19:46.585539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b2690) 00:25:10.790 [2024-11-20 16:19:46.585550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.790 [2024-11-20 16:19:46.585557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5b2690) 00:25:10.790 [2024-11-20 16:19:46.585571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.790 [2024-11-20 16:19:46.585586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614700, cid 4, qid 0 00:25:10.790 [2024-11-20 16:19:46.585591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614880, cid 5, qid 0 00:25:10.790 [2024-11-20 16:19:46.585859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.790 [2024-11-20 16:19:46.585866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.790 [2024-11-20 16:19:46.585870] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585873] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b2690): datao=0, datal=1024, cccid=4 00:25:10.790 [2024-11-20 16:19:46.585878] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x614700) on tqpair(0x5b2690): expected_datao=0, payload_size=1024 00:25:10.790 [2024-11-20 16:19:46.585882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585889] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585893] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.790 [2024-11-20 16:19:46.585904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.790 [2024-11-20 16:19:46.585908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.585912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614880) on tqpair=0x5b2690 00:25:10.790 [2024-11-20 16:19:46.626372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.790 [2024-11-20 16:19:46.626386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.790 [2024-11-20 16:19:46.626389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.626393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614700) on tqpair=0x5b2690 00:25:10.790 [2024-11-20 16:19:46.626408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.626412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b2690) 00:25:10.790 [2024-11-20 16:19:46.626420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.790 [2024-11-20 16:19:46.626436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614700, cid 4, qid 0 00:25:10.790 [2024-11-20 16:19:46.626714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.790 [2024-11-20 16:19:46.626721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.790 [2024-11-20 16:19:46.626725] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.626729] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b2690): datao=0, datal=3072, cccid=4 00:25:10.790 [2024-11-20 16:19:46.626733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x614700) on tqpair(0x5b2690): expected_datao=0, payload_size=3072 00:25:10.790 [2024-11-20 16:19:46.626738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
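The GET LOG PAGE (02) commands above, with cdw10 values ending in 0x70, read the Discovery log page (log identifier 0x70) from the discovery controller; the report that follows is spdk_nvme_identify printing that controller's identify data and both discovery records. A minimal, self-contained sketch of the connect step these traces correspond to, assuming SPDK headers and a built SPDK tree are available (the program name discovery_sketch is made up):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch"; /* made-up app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target the trace above talks to: NVMe/TCP 10.0.0.2:4420,
	 * well-known discovery NQN. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

	/* Runs the whole admin-queue bring-up traced above: icreq, FABRIC
	 * CONNECT, property reads, IDENTIFY, keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}
	printf("connected to %s\n", trid.subnqn);

	/* Tears the session down again (the "Prepare to destruct SSD"
	 * path visible after the report below). */
	spdk_nvme_detach(ctrlr);
	return 0;
}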
00:25:10.790 [2024-11-20 16:19:46.626745] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.626753] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.626875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.790 [2024-11-20 16:19:46.626882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.790 [2024-11-20 16:19:46.626886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.626890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614700) on tqpair=0x5b2690 00:25:10.790 [2024-11-20 16:19:46.626899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.626903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5b2690) 00:25:10.790 [2024-11-20 16:19:46.626909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.790 [2024-11-20 16:19:46.626923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614700, cid 4, qid 0 00:25:10.790 [2024-11-20 16:19:46.627167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:10.790 [2024-11-20 16:19:46.627174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:10.790 [2024-11-20 16:19:46.627178] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.627182] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5b2690): datao=0, datal=8, cccid=4 00:25:10.790 [2024-11-20 16:19:46.627186] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x614700) on tqpair(0x5b2690): expected_datao=0, payload_size=8 00:25:10.790 [2024-11-20 16:19:46.627191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.627197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.627201] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.672173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.790 [2024-11-20 16:19:46.672190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.790 [2024-11-20 16:19:46.672195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.790 [2024-11-20 16:19:46.672199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614700) on tqpair=0x5b2690
=====================================================
00:25:10.790 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:10.790 =====================================================
00:25:10.790 Controller Capabilities/Features
00:25:10.790 ================================
00:25:10.790 Vendor ID: 0000
00:25:10.790 Subsystem Vendor ID: 0000
00:25:10.790 Serial Number: ....................
00:25:10.790 Model Number: ........................................
00:25:10.790 Firmware Version: 25.01
00:25:10.790 Recommended Arb Burst: 0
00:25:10.790 IEEE OUI Identifier: 00 00 00
00:25:10.790 Multi-path I/O
00:25:10.790 May have multiple subsystem ports: No
00:25:10.790 May have multiple controllers: No
00:25:10.790 Associated with SR-IOV VF: No
00:25:10.790 Max Data Transfer Size: 131072
00:25:10.790 Max Number of Namespaces: 0
00:25:10.790 Max Number of I/O Queues: 1024
00:25:10.790 NVMe Specification Version (VS): 1.3
00:25:10.790 NVMe Specification Version (Identify): 1.3
00:25:10.790 Maximum Queue Entries: 128
00:25:10.790 Contiguous Queues Required: Yes
00:25:10.790 Arbitration Mechanisms Supported
00:25:10.790 Weighted Round Robin: Not Supported
00:25:10.790 Vendor Specific: Not Supported
00:25:10.790 Reset Timeout: 15000 ms
00:25:10.790 Doorbell Stride: 4 bytes
00:25:10.790 NVM Subsystem Reset: Not Supported
00:25:10.790 Command Sets Supported
00:25:10.790 NVM Command Set: Supported
00:25:10.790 Boot Partition: Not Supported
00:25:10.790 Memory Page Size Minimum: 4096 bytes
00:25:10.790 Memory Page Size Maximum: 4096 bytes
00:25:10.790 Persistent Memory Region: Not Supported
00:25:10.790 Optional Asynchronous Events Supported
00:25:10.791 Namespace Attribute Notices: Not Supported
00:25:10.791 Firmware Activation Notices: Not Supported
00:25:10.791 ANA Change Notices: Not Supported
00:25:10.791 PLE Aggregate Log Change Notices: Not Supported
00:25:10.791 LBA Status Info Alert Notices: Not Supported
00:25:10.791 EGE Aggregate Log Change Notices: Not Supported
00:25:10.791 Normal NVM Subsystem Shutdown event: Not Supported
00:25:10.791 Zone Descriptor Change Notices: Not Supported
00:25:10.791 Discovery Log Change Notices: Supported
00:25:10.791 Controller Attributes
00:25:10.791 128-bit Host Identifier: Not Supported
00:25:10.791 Non-Operational Permissive Mode: Not Supported
00:25:10.791 NVM Sets: Not Supported
00:25:10.791 Read Recovery Levels: Not Supported
00:25:10.791 Endurance Groups: Not Supported
00:25:10.791 Predictable Latency Mode: Not Supported
00:25:10.791 Traffic Based Keep ALive: Not Supported
00:25:10.791 Namespace Granularity: Not Supported
00:25:10.791 SQ Associations: Not Supported
00:25:10.791 UUID List: Not Supported
00:25:10.791 Multi-Domain Subsystem: Not Supported
00:25:10.791 Fixed Capacity Management: Not Supported
00:25:10.791 Variable Capacity Management: Not Supported
00:25:10.791 Delete Endurance Group: Not Supported
00:25:10.791 Delete NVM Set: Not Supported
00:25:10.791 Extended LBA Formats Supported: Not Supported
00:25:10.791 Flexible Data Placement Supported: Not Supported
00:25:10.791
00:25:10.791 Controller Memory Buffer Support
00:25:10.791 ================================
00:25:10.791 Supported: No
00:25:10.791
00:25:10.791 Persistent Memory Region Support
00:25:10.791 ================================
00:25:10.791 Supported: No
00:25:10.791
00:25:10.791 Admin Command Set Attributes
00:25:10.791 ============================
00:25:10.791 Security Send/Receive: Not Supported
00:25:10.791 Format NVM: Not Supported
00:25:10.791 Firmware Activate/Download: Not Supported
00:25:10.791 Namespace Management: Not Supported
00:25:10.791 Device Self-Test: Not Supported
00:25:10.791 Directives: Not Supported
00:25:10.791 NVMe-MI: Not Supported
00:25:10.791 Virtualization Management: Not Supported
00:25:10.791 Doorbell Buffer Config: Not Supported
00:25:10.791 Get LBA Status Capability: Not Supported
00:25:10.791 Command & Feature Lockdown Capability: Not Supported
00:25:10.791 Abort Command Limit: 1
00:25:10.791 Async Event Request Limit: 4
00:25:10.791 Number of Firmware Slots: N/A
00:25:10.791 Firmware Slot 1 Read-Only: N/A
00:25:10.791 Firmware Activation Without Reset: N/A
00:25:10.791 Multiple Update Detection Support: N/A
00:25:10.791 Firmware Update Granularity: No Information Provided
00:25:10.791 Per-Namespace SMART Log: No
00:25:10.791 Asymmetric Namespace Access Log Page: Not Supported
00:25:10.791 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:10.791 Command Effects Log Page: Not Supported
00:25:10.791 Get Log Page Extended Data: Supported
00:25:10.791 Telemetry Log Pages: Not Supported
00:25:10.791 Persistent Event Log Pages: Not Supported
00:25:10.791 Supported Log Pages Log Page: May Support
00:25:10.791 Commands Supported & Effects Log Page: Not Supported
00:25:10.791 Feature Identifiers & Effects Log Page:May Support
00:25:10.791 NVMe-MI Commands & Effects Log Page: May Support
00:25:10.791 Data Area 4 for Telemetry Log: Not Supported
00:25:10.791 Error Log Page Entries Supported: 128
00:25:10.791 Keep Alive: Not Supported
00:25:10.791
00:25:10.791 NVM Command Set Attributes
00:25:10.791 ==========================
00:25:10.791 Submission Queue Entry Size
00:25:10.791 Max: 1
00:25:10.791 Min: 1
00:25:10.791 Completion Queue Entry Size
00:25:10.791 Max: 1
00:25:10.791 Min: 1
00:25:10.791 Number of Namespaces: 0
00:25:10.791 Compare Command: Not Supported
00:25:10.791 Write Uncorrectable Command: Not Supported
00:25:10.791 Dataset Management Command: Not Supported
00:25:10.791 Write Zeroes Command: Not Supported
00:25:10.791 Set Features Save Field: Not Supported
00:25:10.791 Reservations: Not Supported
00:25:10.791 Timestamp: Not Supported
00:25:10.791 Copy: Not Supported
00:25:10.791 Volatile Write Cache: Not Present
00:25:10.791 Atomic Write Unit (Normal): 1
00:25:10.791 Atomic Write Unit (PFail): 1
00:25:10.791 Atomic Compare & Write Unit: 1
00:25:10.791 Fused Compare & Write: Supported
00:25:10.791 Scatter-Gather List
00:25:10.791 SGL Command Set: Supported
00:25:10.791 SGL Keyed: Supported
00:25:10.791 SGL Bit Bucket Descriptor: Not Supported
00:25:10.791 SGL Metadata Pointer: Not Supported
00:25:10.791 Oversized SGL: Not Supported
00:25:10.791 SGL Metadata Address: Not Supported
00:25:10.791 SGL Offset: Supported
00:25:10.791 Transport SGL Data Block: Not Supported
00:25:10.791 Replay Protected Memory Block: Not Supported
00:25:10.791
00:25:10.791 Firmware Slot Information
00:25:10.791 =========================
00:25:10.791 Active slot: 0
00:25:10.791
00:25:10.791
00:25:10.791 Error Log
00:25:10.791 =========
00:25:10.791
00:25:10.791 Active Namespaces
00:25:10.791 =================
00:25:10.791 Discovery Log Page
00:25:10.791 ==================
00:25:10.791 Generation Counter: 2
00:25:10.791 Number of Records: 2
00:25:10.791 Record Format: 0
00:25:10.791
00:25:10.791 Discovery Log Entry 0
00:25:10.791 ----------------------
00:25:10.791 Transport Type: 3 (TCP)
00:25:10.791 Address Family: 1 (IPv4)
00:25:10.791 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:10.791 Entry Flags:
00:25:10.791 Duplicate Returned Information: 1
00:25:10.791 Explicit Persistent Connection Support for Discovery: 1
00:25:10.791 Transport Requirements:
00:25:10.791 Secure Channel: Not Required
00:25:10.791 Port ID: 0 (0x0000)
00:25:10.791 Controller ID: 65535 (0xffff)
00:25:10.791 Admin Max SQ Size: 128
00:25:10.791 Transport Service Identifier: 4420
00:25:10.791 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:10.791 Transport Address: 10.0.0.2
00:25:10.791 Discovery Log Entry 1
00:25:10.791 ----------------------
00:25:10.791 Transport Type: 3 (TCP)
00:25:10.791 Address Family: 1 (IPv4)
00:25:10.791 Subsystem Type: 2 (NVM Subsystem)
00:25:10.791 Entry Flags:
00:25:10.791 Duplicate Returned Information: 0
00:25:10.791 Explicit Persistent Connection Support for Discovery: 0
00:25:10.791 Transport Requirements:
00:25:10.791 Secure Channel: Not Required
00:25:10.791 Port ID: 0 (0x0000)
00:25:10.791 Controller ID: 65535 (0xffff)
00:25:10.791 Admin Max SQ Size: 128
00:25:10.791 Transport Service Identifier: 4420
00:25:10.791 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:10.791 Transport Address: 10.0.0.2
[2024-11-20 16:19:46.672310] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:10.791 [2024-11-20 16:19:46.672324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614100) on tqpair=0x5b2690 00:25:10.791 [2024-11-20 16:19:46.672332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.791 [2024-11-20 16:19:46.672339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614280) on tqpair=0x5b2690 00:25:10.791 [2024-11-20 16:19:46.672344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.791 [2024-11-20 16:19:46.672351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614400) on tqpair=0x5b2690 00:25:10.791 [2024-11-20 16:19:46.672358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.791 [2024-11-20 16:19:46.672364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.672369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.792 [2024-11-20 16:19:46.672381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.672385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.672392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.672402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.672422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.672644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.672654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.672658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.672664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.672672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.672676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.672680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.672687]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.672701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.672994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.673001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.673005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.673015] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:10.792 [2024-11-20 16:19:46.673021] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:10.792 [2024-11-20 16:19:46.673031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.673045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.673055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.673240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.673248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.673252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.673266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.673281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.673292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.673496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.673502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.673506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.673520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673527] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.673537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.673547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.673798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.673805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.673808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.673822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.673829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.673836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.673846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.674101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.674108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.674111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.674125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.674139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.674149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.674337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.674344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.674347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.674361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.674375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.674386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.674606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.674612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.674615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.674629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.674643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.674656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.674908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.674914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.674918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.674931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.674939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.792 [2024-11-20 16:19:46.674945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.792 [2024-11-20 16:19:46.674955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.792 [2024-11-20 16:19:46.675211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.792 [2024-11-20 16:19:46.675217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.792 [2024-11-20 16:19:46.675221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.675225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.792 [2024-11-20 16:19:46.675235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.792 [2024-11-20 16:19:46.675239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.793 [2024-11-20 16:19:46.675249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.793 [2024-11-20 16:19:46.675259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.793 [2024-11-20 16:19:46.675446] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.793 [2024-11-20 16:19:46.675452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.793 [2024-11-20 16:19:46.675456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.793 [2024-11-20 16:19:46.675469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.793 [2024-11-20 16:19:46.675483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.793 [2024-11-20 16:19:46.675493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.793 [2024-11-20 16:19:46.675713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.793 [2024-11-20 16:19:46.675720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.793 [2024-11-20 16:19:46.675723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.793 [2024-11-20 16:19:46.675737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.793 [2024-11-20 16:19:46.675751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.793 [2024-11-20 16:19:46.675761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.793 [2024-11-20 16:19:46.675967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.793 [2024-11-20 16:19:46.675973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.793 [2024-11-20 16:19:46.675977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.793 [2024-11-20 16:19:46.675991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.675998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5b2690) 00:25:10.793 [2024-11-20 16:19:46.676005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.793 [2024-11-20 16:19:46.676015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x614580, cid 3, qid 0 00:25:10.793 [2024-11-20 16:19:46.680169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:10.793 [2024-11-20 16:19:46.680177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:10.793 [2024-11-20 16:19:46.680181] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:10.793 [2024-11-20 16:19:46.680185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x614580) on tqpair=0x5b2690 00:25:10.793 [2024-11-20 16:19:46.680193] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:25:10.793 00:25:10.793 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:11.058 [2024-11-20 16:19:46.726644] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:25:11.058 [2024-11-20 16:19:46.726688] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377426 ] 00:25:11.058 [2024-11-20 16:19:46.783678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:11.058 [2024-11-20 16:19:46.783744] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:11.058 [2024-11-20 16:19:46.783750] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:11.058 [2024-11-20 16:19:46.783770] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:11.058 [2024-11-20 16:19:46.783782] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:11.058 [2024-11-20 16:19:46.784626] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:11.058 [2024-11-20 16:19:46.784669] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19a2690 0 00:25:11.058 [2024-11-20 16:19:46.795174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:11.058 [2024-11-20 16:19:46.795190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:11.058 [2024-11-20 16:19:46.795195] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:11.058 [2024-11-20 16:19:46.795199] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:11.058 [2024-11-20 16:19:46.795237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.795244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.795248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.058 [2024-11-20 16:19:46.795268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:11.058 [2024-11-20 16:19:46.795293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.058 [2024-11-20 16:19:46.803176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.058 [2024-11-20 16:19:46.803186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.058 [2024-11-20 16:19:46.803190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803195] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.058 [2024-11-20 16:19:46.803208] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:11.058 [2024-11-20 16:19:46.803215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:11.058 [2024-11-20 16:19:46.803221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:11.058 [2024-11-20 16:19:46.803235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.058 [2024-11-20 16:19:46.803252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-11-20 16:19:46.803268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.058 [2024-11-20 16:19:46.803487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.058 [2024-11-20 16:19:46.803494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.058 [2024-11-20 16:19:46.803498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.058 [2024-11-20 16:19:46.803507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:11.058 [2024-11-20 16:19:46.803514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:11.058 [2024-11-20 16:19:46.803522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.058 [2024-11-20 16:19:46.803537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-11-20 16:19:46.803548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.058 [2024-11-20 16:19:46.803802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.058 [2024-11-20 16:19:46.803808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.058 [2024-11-20 16:19:46.803811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.058 [2024-11-20 16:19:46.803820] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:11.058 [2024-11-20 16:19:46.803829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:11.058 [2024-11-20 16:19:46.803836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.058 [2024-11-20 
16:19:46.803840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.803843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.058 [2024-11-20 16:19:46.803850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-11-20 16:19:46.803865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.058 [2024-11-20 16:19:46.804086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.058 [2024-11-20 16:19:46.804092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.058 [2024-11-20 16:19:46.804096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.058 [2024-11-20 16:19:46.804105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:11.058 [2024-11-20 16:19:46.804114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.058 [2024-11-20 16:19:46.804129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-11-20 16:19:46.804139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.058 [2024-11-20 16:19:46.804337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.058 [2024-11-20 16:19:46.804346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.058 [2024-11-20 16:19:46.804349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.058 [2024-11-20 16:19:46.804358] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:11.058 [2024-11-20 16:19:46.804363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:11.058 [2024-11-20 16:19:46.804371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:11.058 [2024-11-20 16:19:46.804480] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:11.058 [2024-11-20 16:19:46.804485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:11.058 [2024-11-20 16:19:46.804493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.058 
[2024-11-20 16:19:46.804508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-11-20 16:19:46.804519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.058 [2024-11-20 16:19:46.804728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.058 [2024-11-20 16:19:46.804735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.058 [2024-11-20 16:19:46.804739] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.058 [2024-11-20 16:19:46.804748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:11.058 [2024-11-20 16:19:46.804757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.058 [2024-11-20 16:19:46.804774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-11-20 16:19:46.804785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.058 [2024-11-20 16:19:46.804975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.058 [2024-11-20 16:19:46.804982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.058 [2024-11-20 16:19:46.804985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.058 [2024-11-20 16:19:46.804989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.058 [2024-11-20 16:19:46.804994] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:11.059 [2024-11-20 16:19:46.804999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.805007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:11.059 [2024-11-20 16:19:46.805020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.805030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.805041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.059 [2024-11-20 16:19:46.805052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.059 [2024-11-20 16:19:46.805318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.059 [2024-11-20 16:19:46.805324] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.059 [2024-11-20 16:19:46.805328] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805333] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=4096, cccid=0 00:25:11.059 [2024-11-20 16:19:46.805337] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04100) on tqpair(0x19a2690): expected_datao=0, payload_size=4096 00:25:11.059 [2024-11-20 16:19:46.805342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805362] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805367] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.059 [2024-11-20 16:19:46.805557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.059 [2024-11-20 16:19:46.805561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.059 [2024-11-20 16:19:46.805573] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:11.059 [2024-11-20 16:19:46.805578] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:11.059 [2024-11-20 16:19:46.805583] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:11.059 [2024-11-20 16:19:46.805593] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:11.059 [2024-11-20 16:19:46.805598] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:11.059 [2024-11-20 16:19:46.805603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.805614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.805624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.805639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:11.059 [2024-11-20 16:19:46.805650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.059 [2024-11-20 16:19:46.805834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.059 [2024-11-20 16:19:46.805841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.059 [2024-11-20 16:19:46.805844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.059 
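At this point nvme_ctrlr_identify_done has parsed the IDENTIFY CONTROLLER (CNS 01h) payload: MDTS caps transfers at 131072 bytes, CNTLID is 0x0001, the transport allows 16 SGEs, and fused compare-and-write is advertised. A sketch of inspecting the same fields with nvme-cli (illustrative; the /dev/nvme0 path is an assumption):

    nvme id-ctrl /dev/nvme0 | grep -E '^(mdts|cntlid|fuses)'
    # mdts is a power-of-two multiplier of the minimum memory page size, so with
    # a 4096-byte MPSMIN an mdts of 5 yields 4096 << 5 = 131072 bytes, matching
    # the max_xfer_size reported in the log.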
[2024-11-20 16:19:46.805855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.805869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.059 [2024-11-20 16:19:46.805875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.805889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.059 [2024-11-20 16:19:46.805895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.805908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.059 [2024-11-20 16:19:46.805914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.805927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.059 [2024-11-20 16:19:46.805932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.805941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.805947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.805951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.805958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.059 [2024-11-20 16:19:46.805970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04100, cid 0, qid 0 00:25:11.059 [2024-11-20 16:19:46.805975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04280, cid 1, qid 0 00:25:11.059 [2024-11-20 16:19:46.805980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04400, cid 2, qid 0 00:25:11.059 [2024-11-20 16:19:46.805987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04580, cid 3, qid 0 00:25:11.059 [2024-11-20 16:19:46.805992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04700, cid 
4, qid 0 00:25:11.059 [2024-11-20 16:19:46.806246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.059 [2024-11-20 16:19:46.806253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.059 [2024-11-20 16:19:46.806256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04700) on tqpair=0x19a2690 00:25:11.059 [2024-11-20 16:19:46.806267] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:11.059 [2024-11-20 16:19:46.806273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.806282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.806289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.806295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.806309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:11.059 [2024-11-20 16:19:46.806320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04700, cid 4, qid 0 00:25:11.059 [2024-11-20 16:19:46.806508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.059 [2024-11-20 16:19:46.806515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.059 [2024-11-20 16:19:46.806519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04700) on tqpair=0x19a2690 00:25:11.059 [2024-11-20 16:19:46.806590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.806600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:11.059 [2024-11-20 16:19:46.806608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a2690) 00:25:11.059 [2024-11-20 16:19:46.806619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.059 [2024-11-20 16:19:46.806630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04700, cid 4, qid 0 00:25:11.059 [2024-11-20 16:19:46.806850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.059 [2024-11-20 16:19:46.806857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.059 [2024-11-20 16:19:46.806860] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806864] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=4096, cccid=4 00:25:11.059 [2024-11-20 16:19:46.806869] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04700) on tqpair(0x19a2690): expected_datao=0, payload_size=4096 00:25:11.059 [2024-11-20 16:19:46.806873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806880] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.806884] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.807042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.059 [2024-11-20 16:19:46.807048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.059 [2024-11-20 16:19:46.807052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.059 [2024-11-20 16:19:46.807056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04700) on tqpair=0x19a2690 00:25:11.059 [2024-11-20 16:19:46.807067] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:11.059 [2024-11-20 16:19:46.807078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.807087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.807095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.807098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.807105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.807116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04700, cid 4, qid 0 00:25:11.060 [2024-11-20 16:19:46.811172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.060 [2024-11-20 16:19:46.811181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.060 [2024-11-20 16:19:46.811184] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811188] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=4096, cccid=4 00:25:11.060 [2024-11-20 16:19:46.811193] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04700) on tqpair(0x19a2690): expected_datao=0, payload_size=4096 00:25:11.060 [2024-11-20 16:19:46.811197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811204] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.060 [2024-11-20 16:19:46.811219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.060 [2024-11-20 16:19:46.811223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.060 
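The admin commands traced through this stretch are Set Features 0Bh (async event configuration, followed by four ASYNC EVENT REQUESTs), Get Features 0Fh (keep alive timer, reported as 5000000 us), Set Features 07h (number of queues), and IDENTIFY with CNS 02h/00h for the active-namespace list and per-namespace data; the CNS 03h descriptor fetch follows just below. Hedged nvme-cli equivalents (the device path and the feature values are assumptions chosen to mirror the log, not captured from it):

    nvme set-feature /dev/nvme0 -f 0x0b -v 0x0100   # async event config; 0x0100 = namespace attribute notices (assumed value)
    nvme get-feature /dev/nvme0 -f 0x0f             # keep alive timer
    nvme set-feature /dev/nvme0 -f 0x07 -v 0x007e007e   # number of queues; 0x7e zero-based = 127 SQs/CQs, as in the log
    nvme list-ns /dev/nvme0                         # IDENTIFY CNS=02h, active namespace list
    nvme id-ns /dev/nvme0 -n 1                      # IDENTIFY CNS=00h for namespace 1
    nvme ns-descs /dev/nvme0 -n 1                   # IDENTIFY CNS=03h, namespace ID descriptors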
[2024-11-20 16:19:46.811227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04700) on tqpair=0x19a2690 00:25:11.060 [2024-11-20 16:19:46.811242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.811253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.811261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.811271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.811283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04700, cid 4, qid 0 00:25:11.060 [2024-11-20 16:19:46.811475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.060 [2024-11-20 16:19:46.811482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.060 [2024-11-20 16:19:46.811485] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811489] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=4096, cccid=4 00:25:11.060 [2024-11-20 16:19:46.811493] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04700) on tqpair(0x19a2690): expected_datao=0, payload_size=4096 00:25:11.060 [2024-11-20 16:19:46.811501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811515] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.811519] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.060 [2024-11-20 16:19:46.853350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.060 [2024-11-20 16:19:46.853354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04700) on tqpair=0x19a2690 00:25:11.060 [2024-11-20 16:19:46.853368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.853378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.853389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.853395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.853401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.853407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.853412] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:11.060 [2024-11-20 16:19:46.853417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:11.060 [2024-11-20 16:19:46.853423] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:11.060 [2024-11-20 16:19:46.853441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.853453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.853460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.853474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.060 [2024-11-20 16:19:46.853490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04700, cid 4, qid 0 00:25:11.060 [2024-11-20 16:19:46.853496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04880, cid 5, qid 0 00:25:11.060 [2024-11-20 16:19:46.853626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.060 [2024-11-20 16:19:46.853633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.060 [2024-11-20 16:19:46.853636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04700) on tqpair=0x19a2690 00:25:11.060 [2024-11-20 16:19:46.853647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.060 [2024-11-20 16:19:46.853653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.060 [2024-11-20 16:19:46.853657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04880) on tqpair=0x19a2690 00:25:11.060 [2024-11-20 16:19:46.853674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.853684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.853695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04880, cid 5, qid 0 00:25:11.060 [2024-11-20 16:19:46.853873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.060 [2024-11-20 16:19:46.853880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.060 [2024-11-20 
16:19:46.853883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04880) on tqpair=0x19a2690 00:25:11.060 [2024-11-20 16:19:46.853896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.853900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.853907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.853917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04880, cid 5, qid 0 00:25:11.060 [2024-11-20 16:19:46.854131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.060 [2024-11-20 16:19:46.854138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.060 [2024-11-20 16:19:46.854141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.854145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04880) on tqpair=0x19a2690 00:25:11.060 [2024-11-20 16:19:46.854155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.854165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.854172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.854182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04880, cid 5, qid 0 00:25:11.060 [2024-11-20 16:19:46.854432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.060 [2024-11-20 16:19:46.854439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.060 [2024-11-20 16:19:46.854442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.854446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04880) on tqpair=0x19a2690 00:25:11.060 [2024-11-20 16:19:46.854463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.854468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.854474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.854482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.854485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.854492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.854499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.854503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.854509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.854519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.060 [2024-11-20 16:19:46.854523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19a2690) 00:25:11.060 [2024-11-20 16:19:46.854529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.060 [2024-11-20 16:19:46.854541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04880, cid 5, qid 0 00:25:11.061 [2024-11-20 16:19:46.854547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04700, cid 4, qid 0 00:25:11.061 [2024-11-20 16:19:46.854551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04a00, cid 6, qid 0 00:25:11.061 [2024-11-20 16:19:46.854556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04b80, cid 7, qid 0 00:25:11.061 [2024-11-20 16:19:46.854862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.061 [2024-11-20 16:19:46.854870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.061 [2024-11-20 16:19:46.854873] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.854877] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=8192, cccid=5 00:25:11.061 [2024-11-20 16:19:46.854882] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04880) on tqpair(0x19a2690): expected_datao=0, payload_size=8192 00:25:11.061 [2024-11-20 16:19:46.854886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.854953] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.854958] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.854964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.061 [2024-11-20 16:19:46.854970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.061 [2024-11-20 16:19:46.854973] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.854977] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=512, cccid=4 00:25:11.061 [2024-11-20 16:19:46.854981] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04700) on tqpair(0x19a2690): expected_datao=0, payload_size=512 00:25:11.061 [2024-11-20 16:19:46.854986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.854992] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.854996] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.061 [2024-11-20 16:19:46.855007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.061 [2024-11-20 16:19:46.855011] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855014] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=512, cccid=6 00:25:11.061 [2024-11-20 
16:19:46.855019] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04a00) on tqpair(0x19a2690): expected_datao=0, payload_size=512 00:25:11.061 [2024-11-20 16:19:46.855023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855029] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855033] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.061 [2024-11-20 16:19:46.855044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.061 [2024-11-20 16:19:46.855048] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855051] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a2690): datao=0, datal=4096, cccid=7 00:25:11.061 [2024-11-20 16:19:46.855056] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a04b80) on tqpair(0x19a2690): expected_datao=0, payload_size=4096 00:25:11.061 [2024-11-20 16:19:46.855065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855084] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.855089] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.899171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.061 [2024-11-20 16:19:46.899181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.061 [2024-11-20 16:19:46.899185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.899189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04880) on tqpair=0x19a2690 00:25:11.061 [2024-11-20 16:19:46.899204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.061 [2024-11-20 16:19:46.899210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.061 [2024-11-20 16:19:46.899214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.899217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04700) on tqpair=0x19a2690 00:25:11.061 [2024-11-20 16:19:46.899228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.061 [2024-11-20 16:19:46.899234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.061 [2024-11-20 16:19:46.899238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.899242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04a00) on tqpair=0x19a2690 00:25:11.061 [2024-11-20 16:19:46.899249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.061 [2024-11-20 16:19:46.899255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.061 [2024-11-20 16:19:46.899258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.061 [2024-11-20 16:19:46.899262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04b80) on tqpair=0x19a2690 00:25:11.061 ===================================================== 00:25:11.061 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:11.061 ===================================================== 00:25:11.061 Controller Capabilities/Features 00:25:11.061 
================================ 00:25:11.061 Vendor ID: 8086 00:25:11.061 Subsystem Vendor ID: 8086 00:25:11.061 Serial Number: SPDK00000000000001 00:25:11.061 Model Number: SPDK bdev Controller 00:25:11.061 Firmware Version: 25.01 00:25:11.061 Recommended Arb Burst: 6 00:25:11.061 IEEE OUI Identifier: e4 d2 5c 00:25:11.061 Multi-path I/O 00:25:11.061 May have multiple subsystem ports: Yes 00:25:11.061 May have multiple controllers: Yes 00:25:11.061 Associated with SR-IOV VF: No 00:25:11.061 Max Data Transfer Size: 131072 00:25:11.061 Max Number of Namespaces: 32 00:25:11.061 Max Number of I/O Queues: 127 00:25:11.061 NVMe Specification Version (VS): 1.3 00:25:11.061 NVMe Specification Version (Identify): 1.3 00:25:11.061 Maximum Queue Entries: 128 00:25:11.061 Contiguous Queues Required: Yes 00:25:11.061 Arbitration Mechanisms Supported 00:25:11.061 Weighted Round Robin: Not Supported 00:25:11.061 Vendor Specific: Not Supported 00:25:11.061 Reset Timeout: 15000 ms 00:25:11.061 Doorbell Stride: 4 bytes 00:25:11.061 NVM Subsystem Reset: Not Supported 00:25:11.061 Command Sets Supported 00:25:11.061 NVM Command Set: Supported 00:25:11.061 Boot Partition: Not Supported 00:25:11.061 Memory Page Size Minimum: 4096 bytes 00:25:11.061 Memory Page Size Maximum: 4096 bytes 00:25:11.061 Persistent Memory Region: Not Supported 00:25:11.061 Optional Asynchronous Events Supported 00:25:11.061 Namespace Attribute Notices: Supported 00:25:11.061 Firmware Activation Notices: Not Supported 00:25:11.061 ANA Change Notices: Not Supported 00:25:11.061 PLE Aggregate Log Change Notices: Not Supported 00:25:11.061 LBA Status Info Alert Notices: Not Supported 00:25:11.061 EGE Aggregate Log Change Notices: Not Supported 00:25:11.061 Normal NVM Subsystem Shutdown event: Not Supported 00:25:11.061 Zone Descriptor Change Notices: Not Supported 00:25:11.061 Discovery Log Change Notices: Not Supported 00:25:11.061 Controller Attributes 00:25:11.061 128-bit Host Identifier: Supported 00:25:11.061 Non-Operational Permissive Mode: Not Supported 00:25:11.061 NVM Sets: Not Supported 00:25:11.061 Read Recovery Levels: Not Supported 00:25:11.061 Endurance Groups: Not Supported 00:25:11.061 Predictable Latency Mode: Not Supported 00:25:11.061 Traffic Based Keep ALive: Not Supported 00:25:11.061 Namespace Granularity: Not Supported 00:25:11.061 SQ Associations: Not Supported 00:25:11.061 UUID List: Not Supported 00:25:11.061 Multi-Domain Subsystem: Not Supported 00:25:11.061 Fixed Capacity Management: Not Supported 00:25:11.061 Variable Capacity Management: Not Supported 00:25:11.061 Delete Endurance Group: Not Supported 00:25:11.061 Delete NVM Set: Not Supported 00:25:11.061 Extended LBA Formats Supported: Not Supported 00:25:11.061 Flexible Data Placement Supported: Not Supported 00:25:11.061 00:25:11.061 Controller Memory Buffer Support 00:25:11.061 ================================ 00:25:11.061 Supported: No 00:25:11.061 00:25:11.061 Persistent Memory Region Support 00:25:11.061 ================================ 00:25:11.061 Supported: No 00:25:11.061 00:25:11.061 Admin Command Set Attributes 00:25:11.061 ============================ 00:25:11.061 Security Send/Receive: Not Supported 00:25:11.061 Format NVM: Not Supported 00:25:11.061 Firmware Activate/Download: Not Supported 00:25:11.061 Namespace Management: Not Supported 00:25:11.061 Device Self-Test: Not Supported 00:25:11.061 Directives: Not Supported 00:25:11.061 NVMe-MI: Not Supported 00:25:11.061 Virtualization Management: Not Supported 00:25:11.061 Doorbell Buffer 
Config: Not Supported 00:25:11.061 Get LBA Status Capability: Not Supported 00:25:11.061 Command & Feature Lockdown Capability: Not Supported 00:25:11.061 Abort Command Limit: 4 00:25:11.061 Async Event Request Limit: 4 00:25:11.061 Number of Firmware Slots: N/A 00:25:11.061 Firmware Slot 1 Read-Only: N/A 00:25:11.061 Firmware Activation Without Reset: N/A 00:25:11.061 Multiple Update Detection Support: N/A 00:25:11.061 Firmware Update Granularity: No Information Provided 00:25:11.061 Per-Namespace SMART Log: No 00:25:11.061 Asymmetric Namespace Access Log Page: Not Supported 00:25:11.061 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:11.061 Command Effects Log Page: Supported 00:25:11.061 Get Log Page Extended Data: Supported 00:25:11.061 Telemetry Log Pages: Not Supported 00:25:11.061 Persistent Event Log Pages: Not Supported 00:25:11.061 Supported Log Pages Log Page: May Support 00:25:11.061 Commands Supported & Effects Log Page: Not Supported 00:25:11.061 Feature Identifiers & Effects Log Page:May Support 00:25:11.062 NVMe-MI Commands & Effects Log Page: May Support 00:25:11.062 Data Area 4 for Telemetry Log: Not Supported 00:25:11.062 Error Log Page Entries Supported: 128 00:25:11.062 Keep Alive: Supported 00:25:11.062 Keep Alive Granularity: 10000 ms 00:25:11.062 00:25:11.062 NVM Command Set Attributes 00:25:11.062 ========================== 00:25:11.062 Submission Queue Entry Size 00:25:11.062 Max: 64 00:25:11.062 Min: 64 00:25:11.062 Completion Queue Entry Size 00:25:11.062 Max: 16 00:25:11.062 Min: 16 00:25:11.062 Number of Namespaces: 32 00:25:11.062 Compare Command: Supported 00:25:11.062 Write Uncorrectable Command: Not Supported 00:25:11.062 Dataset Management Command: Supported 00:25:11.062 Write Zeroes Command: Supported 00:25:11.062 Set Features Save Field: Not Supported 00:25:11.062 Reservations: Supported 00:25:11.062 Timestamp: Not Supported 00:25:11.062 Copy: Supported 00:25:11.062 Volatile Write Cache: Present 00:25:11.062 Atomic Write Unit (Normal): 1 00:25:11.062 Atomic Write Unit (PFail): 1 00:25:11.062 Atomic Compare & Write Unit: 1 00:25:11.062 Fused Compare & Write: Supported 00:25:11.062 Scatter-Gather List 00:25:11.062 SGL Command Set: Supported 00:25:11.062 SGL Keyed: Supported 00:25:11.062 SGL Bit Bucket Descriptor: Not Supported 00:25:11.062 SGL Metadata Pointer: Not Supported 00:25:11.062 Oversized SGL: Not Supported 00:25:11.062 SGL Metadata Address: Not Supported 00:25:11.062 SGL Offset: Supported 00:25:11.062 Transport SGL Data Block: Not Supported 00:25:11.062 Replay Protected Memory Block: Not Supported 00:25:11.062 00:25:11.062 Firmware Slot Information 00:25:11.062 ========================= 00:25:11.062 Active slot: 1 00:25:11.062 Slot 1 Firmware Revision: 25.01 00:25:11.062 00:25:11.062 00:25:11.062 Commands Supported and Effects 00:25:11.062 ============================== 00:25:11.062 Admin Commands 00:25:11.062 -------------- 00:25:11.062 Get Log Page (02h): Supported 00:25:11.062 Identify (06h): Supported 00:25:11.062 Abort (08h): Supported 00:25:11.062 Set Features (09h): Supported 00:25:11.062 Get Features (0Ah): Supported 00:25:11.062 Asynchronous Event Request (0Ch): Supported 00:25:11.062 Keep Alive (18h): Supported 00:25:11.062 I/O Commands 00:25:11.062 ------------ 00:25:11.062 Flush (00h): Supported LBA-Change 00:25:11.062 Write (01h): Supported LBA-Change 00:25:11.062 Read (02h): Supported 00:25:11.062 Compare (05h): Supported 00:25:11.062 Write Zeroes (08h): Supported LBA-Change 00:25:11.062 Dataset Management (09h): Supported 
LBA-Change 00:25:11.062 Copy (19h): Supported LBA-Change 00:25:11.062 00:25:11.062 Error Log 00:25:11.062 ========= 00:25:11.062 00:25:11.062 Arbitration 00:25:11.062 =========== 00:25:11.062 Arbitration Burst: 1 00:25:11.062 00:25:11.062 Power Management 00:25:11.062 ================ 00:25:11.062 Number of Power States: 1 00:25:11.062 Current Power State: Power State #0 00:25:11.062 Power State #0: 00:25:11.062 Max Power: 0.00 W 00:25:11.062 Non-Operational State: Operational 00:25:11.062 Entry Latency: Not Reported 00:25:11.062 Exit Latency: Not Reported 00:25:11.062 Relative Read Throughput: 0 00:25:11.062 Relative Read Latency: 0 00:25:11.062 Relative Write Throughput: 0 00:25:11.062 Relative Write Latency: 0 00:25:11.062 Idle Power: Not Reported 00:25:11.062 Active Power: Not Reported 00:25:11.062 Non-Operational Permissive Mode: Not Supported 00:25:11.062 00:25:11.062 Health Information 00:25:11.062 ================== 00:25:11.062 Critical Warnings: 00:25:11.062 Available Spare Space: OK 00:25:11.062 Temperature: OK 00:25:11.062 Device Reliability: OK 00:25:11.062 Read Only: No 00:25:11.062 Volatile Memory Backup: OK 00:25:11.062 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:11.062 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:11.062 Available Spare: 0% 00:25:11.062 Available Spare Threshold: 0% 00:25:11.062 Life Percentage Used:[2024-11-20 16:19:46.899367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.899373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19a2690) 00:25:11.062 [2024-11-20 16:19:46.899381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.062 [2024-11-20 16:19:46.899395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04b80, cid 7, qid 0 00:25:11.062 [2024-11-20 16:19:46.899600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.062 [2024-11-20 16:19:46.899607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.062 [2024-11-20 16:19:46.899610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.899614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04b80) on tqpair=0x19a2690 00:25:11.062 [2024-11-20 16:19:46.899649] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:11.062 [2024-11-20 16:19:46.899659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04100) on tqpair=0x19a2690 00:25:11.062 [2024-11-20 16:19:46.899666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-11-20 16:19:46.899672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04280) on tqpair=0x19a2690 00:25:11.062 [2024-11-20 16:19:46.899677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-11-20 16:19:46.899682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04400) on tqpair=0x19a2690 00:25:11.062 [2024-11-20 16:19:46.899687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-11-20 16:19:46.899692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1a04580) on tqpair=0x19a2690 00:25:11.062 [2024-11-20 16:19:46.899697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-11-20 16:19:46.899707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.899711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.899715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a2690) 00:25:11.062 [2024-11-20 16:19:46.899722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.062 [2024-11-20 16:19:46.899735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04580, cid 3, qid 0 00:25:11.062 [2024-11-20 16:19:46.899955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.062 [2024-11-20 16:19:46.899961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.062 [2024-11-20 16:19:46.899964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.899968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04580) on tqpair=0x19a2690 00:25:11.062 [2024-11-20 16:19:46.899975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.899979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.899983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a2690) 00:25:11.062 [2024-11-20 16:19:46.899990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.062 [2024-11-20 16:19:46.900004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04580, cid 3, qid 0 00:25:11.062 [2024-11-20 16:19:46.900213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.062 [2024-11-20 16:19:46.900220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.062 [2024-11-20 16:19:46.900223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.900227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04580) on tqpair=0x19a2690 00:25:11.062 [2024-11-20 16:19:46.900232] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:11.062 [2024-11-20 16:19:46.900237] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:11.062 [2024-11-20 16:19:46.900247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.062 [2024-11-20 16:19:46.900251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.900254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a2690) 00:25:11.063 [2024-11-20 16:19:46.900261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.063 [2024-11-20 16:19:46.900272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04580, cid 3, qid 0 00:25:11.063 [2024-11-20 16:19:46.900442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.063 [2024-11-20 
16:19:46.900449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.063 [2024-11-20 16:19:46.900452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.900456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04580) on tqpair=0x19a2690 00:25:11.063 [2024-11-20 16:19:46.900467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.900471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.900474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a2690) 00:25:11.063 [2024-11-20 16:19:46.900481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.063 [2024-11-20 16:19:46.900491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04580, cid 3, qid 0 00:25:11.063 [2024-11-20 16:19:46.904169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.063 [2024-11-20 16:19:46.904179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.063 [2024-11-20 16:19:46.904188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.904192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04580) on tqpair=0x19a2690 00:25:11.063 [2024-11-20 16:19:46.904203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.904207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.904210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a2690) 00:25:11.063 [2024-11-20 16:19:46.904217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.063 [2024-11-20 16:19:46.904229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a04580, cid 3, qid 0 00:25:11.063 [2024-11-20 16:19:46.904413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.063 [2024-11-20 16:19:46.904419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.063 [2024-11-20 16:19:46.904423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.063 [2024-11-20 16:19:46.904426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a04580) on tqpair=0x19a2690 00:25:11.063 [2024-11-20 16:19:46.904434] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:11.063 0% 00:25:11.063 Data Units Read: 0 00:25:11.063 Data Units Written: 0 00:25:11.063 Host Read Commands: 0 00:25:11.063 Host Write Commands: 0 00:25:11.063 Controller Busy Time: 0 minutes 00:25:11.063 Power Cycles: 0 00:25:11.063 Power On Hours: 0 hours 00:25:11.063 Unsafe Shutdowns: 0 00:25:11.063 Unrecoverable Media Errors: 0 00:25:11.063 Lifetime Error Log Entries: 0 00:25:11.063 Warning Temperature Time: 0 minutes 00:25:11.063 Critical Temperature Time: 0 minutes 00:25:11.063 00:25:11.063 Number of Queues 00:25:11.063 ================ 00:25:11.063 Number of I/O Submission Queues: 127 00:25:11.063 Number of I/O Completion Queues: 127 00:25:11.063 00:25:11.063 Active Namespaces 00:25:11.063 ================= 00:25:11.063 Namespace ID:1 00:25:11.063 Error Recovery Timeout: Unlimited 00:25:11.063 
Command Set Identifier: NVM (00h) 00:25:11.063 Deallocate: Supported 00:25:11.063 Deallocated/Unwritten Error: Not Supported 00:25:11.063 Deallocated Read Value: Unknown 00:25:11.063 Deallocate in Write Zeroes: Not Supported 00:25:11.063 Deallocated Guard Field: 0xFFFF 00:25:11.063 Flush: Supported 00:25:11.063 Reservation: Supported 00:25:11.063 Namespace Sharing Capabilities: Multiple Controllers 00:25:11.063 Size (in LBAs): 131072 (0GiB) 00:25:11.063 Capacity (in LBAs): 131072 (0GiB) 00:25:11.063 Utilization (in LBAs): 131072 (0GiB) 00:25:11.063 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:11.063 EUI64: ABCDEF0123456789 00:25:11.063 UUID: b1c24a89-3f29-402a-be4f-0978bcd8ac73 00:25:11.063 Thin Provisioning: Not Supported 00:25:11.063 Per-NS Atomic Units: Yes 00:25:11.063 Atomic Boundary Size (Normal): 0 00:25:11.063 Atomic Boundary Size (PFail): 0 00:25:11.063 Atomic Boundary Offset: 0 00:25:11.063 Maximum Single Source Range Length: 65535 00:25:11.063 Maximum Copy Length: 65535 00:25:11.063 Maximum Source Range Count: 1 00:25:11.063 NGUID/EUI64 Never Reused: No 00:25:11.063 Namespace Write Protected: No 00:25:11.063 Number of LBA Formats: 1 00:25:11.063 Current LBA Format: LBA Format #00 00:25:11.063 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:11.063 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.063 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.063 rmmod nvme_tcp 00:25:11.063 rmmod nvme_fabrics 00:25:11.063 rmmod nvme_keyring 00:25:11.324 16:19:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1377267 ']' 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1377267 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1377267 ']' 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1377267 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:11.324 16:19:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1377267 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1377267' 00:25:11.324 killing process with pid 1377267 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1377267 00:25:11.324 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1377267 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.583 16:19:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.490 16:19:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.490 00:25:13.490 real 0m11.755s 00:25:13.490 user 0m8.799s 00:25:13.490 sys 0m6.290s 00:25:13.490 16:19:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.490 16:19:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:13.490 ************************************ 00:25:13.490 END TEST nvmf_identify 00:25:13.490 ************************************ 00:25:13.490 16:19:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:13.490 16:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.490 16:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.490 16:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.751 ************************************ 00:25:13.751 START TEST nvmf_perf 00:25:13.751 ************************************ 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:13.751 * Looking for test storage... 
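Between the identify test's teardown above and the perf test's storage probe here, the trace runs the standard nvmftestfini cleanup. Gathered into one sequence for reference (the PID, NQN, and interface name are the ones from this run; the direct rpc.py invocation is an assumption, since the harness goes through its rpc_cmd wrapper):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    kill 1377267 && wait 1377267                          # stop the nvmf target (reactor_0)
    modprobe -v -r nvme-tcp                               # unload host-side transport modules
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove the test firewall rules
    ip -4 addr flush cvl_0_1                              # clear the test NIC's addresses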
00:25:13.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.751 --rc genhtml_branch_coverage=1 00:25:13.751 --rc genhtml_function_coverage=1 00:25:13.751 --rc genhtml_legend=1 00:25:13.751 --rc geninfo_all_blocks=1 00:25:13.751 --rc geninfo_unexecuted_blocks=1 00:25:13.751 00:25:13.751 ' 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:13.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.751 --rc genhtml_branch_coverage=1 00:25:13.751 --rc genhtml_function_coverage=1 00:25:13.751 --rc genhtml_legend=1 00:25:13.751 --rc geninfo_all_blocks=1 00:25:13.751 --rc geninfo_unexecuted_blocks=1 00:25:13.751 00:25:13.751 ' 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:13.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.751 --rc genhtml_branch_coverage=1 00:25:13.751 --rc genhtml_function_coverage=1 00:25:13.751 --rc genhtml_legend=1 00:25:13.751 --rc geninfo_all_blocks=1 00:25:13.751 --rc geninfo_unexecuted_blocks=1 00:25:13.751 00:25:13.751 ' 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:13.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.751 --rc genhtml_branch_coverage=1 00:25:13.751 --rc genhtml_function_coverage=1 00:25:13.751 --rc genhtml_legend=1 00:25:13.751 --rc geninfo_all_blocks=1 00:25:13.751 --rc geninfo_unexecuted_blocks=1 00:25:13.751 00:25:13.751 ' 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.751 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.752 16:19:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.752 16:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:21.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:21.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:21.893 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:21.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.894 16:19:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:21.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.894 16:19:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.894 16:19:57 
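nvmf_tcp_init then splits the two ports: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace, together with the loopback, firewall, and ping steps that follow it below:

    # Namespace split performed by nvmf_tcp_init (interface/address values from this log).
    ip netns add cvl_0_0_ns_spdk                          # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the comment tag lets cleanup find the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The two sub-millisecond pings below confirm the link before any NVMe traffic is attempted.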
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:25:21.894 00:25:21.894 --- 10.0.0.2 ping statistics --- 00:25:21.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.894 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:25:21.894 00:25:21.894 --- 10.0.0.1 ping statistics --- 00:25:21.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.894 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1381728 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1381728 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1381728 ']' 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:21.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.894 16:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.894 [2024-11-20 16:19:57.292173] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:25:21.894 [2024-11-20 16:19:57.292239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.894 [2024-11-20 16:19:57.395052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.894 [2024-11-20 16:19:57.448055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.894 [2024-11-20 16:19:57.448106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.894 [2024-11-20 16:19:57.448114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.894 [2024-11-20 16:19:57.448121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.894 [2024-11-20 16:19:57.448127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.894 [2024-11-20 16:19:57.450553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.894 [2024-11-20 16:19:57.450714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.894 [2024-11-20 16:19:57.450874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.894 [2024-11-20 16:19:57.450874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:22.467 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:23.039 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:23.039 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:23.039 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:23.039 16:19:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:23.300 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
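With the fabric plumbed, nvmfappstart launches nvmf_tgt inside the target namespace on four cores (-m 0xF) and waitforlisten blocks until the RPC socket answers; gen_nvme.sh plus load_subsystem_config then attach the local controller at 0000:65:00.0, and a 64 MiB, 512-byte-block Malloc bdev is created beside it. A rough equivalent of that startup (paths shortened; the polling loop is a crude stand-in for waitforlisten, using rpc_get_methods only as a cheap probe):

    # Start the target inside the namespace, flags as logged.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Wait until /var/tmp/spdk.sock accepts RPCs before configuring anything.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    ./scripts/rpc.py bdev_malloc_create 64 512            # -> Malloc0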
00:25:23.300 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:23.300 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:23.300 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:23.300 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:23.561 [2024-11-20 16:19:59.276261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.561 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.821 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:23.821 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.821 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:23.821 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:24.082 16:19:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.341 [2024-11-20 16:20:00.043240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.341 16:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:24.341 16:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:24.341 16:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:24.341 16:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:24.341 16:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:25.722 Initializing NVMe Controllers 00:25:25.722 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:25.722 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:25.722 Initialization complete. Launching workers. 
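Before the local-PCIe baseline results below, note how perf.sh assembled the export path purely over RPC: one TCP transport, one subsystem, both bdevs as namespaces, plus data and discovery listeners on 10.0.0.2:4420. Condensed from the trace (rpc.py path shortened):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                  # transport options exactly as logged
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The PCIe baseline numbers for the locally attached controller follow below.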
00:25:25.722 ======================================================== 00:25:25.722 Latency(us) 00:25:25.722 Device Information : IOPS MiB/s Average min max 00:25:25.722 PCIE (0000:65:00.0) NSID 1 from core 0: 77790.85 303.87 410.55 13.25 5359.52 00:25:25.722 ======================================================== 00:25:25.722 Total : 77790.85 303.87 410.55 13.25 5359.52 00:25:25.722 00:25:25.722 16:20:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:27.105 Initializing NVMe Controllers 00:25:27.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:27.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:27.105 Initialization complete. Launching workers. 00:25:27.105 ======================================================== 00:25:27.105 Latency(us) 00:25:27.105 Device Information : IOPS MiB/s Average min max 00:25:27.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 128.92 0.50 7897.85 269.15 45940.50 00:25:27.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.96 0.24 16911.13 4984.50 47907.12 00:25:27.105 ======================================================== 00:25:27.105 Total : 190.88 0.75 10823.63 269.15 47907.12 00:25:27.105 00:25:27.105 16:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:29.016 Initializing NVMe Controllers 00:25:29.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:29.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:29.016 Initialization complete. Launching workers. 00:25:29.016 ======================================================== 00:25:29.016 Latency(us) 00:25:29.016 Device Information : IOPS MiB/s Average min max 00:25:29.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11878.63 46.40 2693.75 347.18 6542.48 00:25:29.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3788.92 14.80 8497.49 5805.62 18606.06 00:25:29.016 ======================================================== 00:25:29.016 Total : 15667.55 61.20 4097.29 347.18 18606.06 00:25:29.016 00:25:29.016 16:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:29.016 16:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:29.016 16:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.992 Initializing NVMe Controllers 00:25:30.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.992 Controller IO queue size 128, less than required. 00:25:30.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
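Every fabric run above and below is the same binary under different load parameters: -q sets the queue depth, -o the I/O size in bytes, -w the access pattern, -M the read percentage of the mix, -t the duration in seconds, and -r the target's transport ID; the QD=1 run measures pure latency while deeper queues probe throughput, and in each table the RAM-backed Malloc namespace (NSID 1) predictably beats the NVMe-backed NSID 2. For reference, the QD=32 invocation from this log (binary path shortened, other flags verbatim):

    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The 256 KiB (-o 262144) QD=128 run just starting continues below.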
00:25:30.992 Controller IO queue size 128, less than required. 00:25:30.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:30.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:30.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:30.992 Initialization complete. Launching workers. 00:25:30.992 ======================================================== 00:25:30.992 Latency(us) 00:25:30.992 Device Information : IOPS MiB/s Average min max 00:25:30.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2079.43 519.86 62521.04 39812.06 99333.47 00:25:30.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 615.98 153.99 218245.20 73708.05 333590.73 00:25:30.992 ======================================================== 00:25:30.992 Total : 2695.41 673.85 98108.53 39812.06 333590.73 00:25:30.992 00:25:31.253 16:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:31.253 No valid NVMe controllers or AIO or URING devices found 00:25:31.253 Initializing NVMe Controllers 00:25:31.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:31.253 Controller IO queue size 128, less than required. 00:25:31.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.253 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:31.253 Controller IO queue size 128, less than required. 00:25:31.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.253 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:31.253 WARNING: Some requested NVMe devices were skipped 00:25:31.253 16:20:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:33.799 Initializing NVMe Controllers 00:25:33.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.799 Controller IO queue size 128, less than required. 00:25:33.799 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.799 Controller IO queue size 128, less than required. 00:25:33.799 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:33.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:33.799 Initialization complete. Launching workers. 
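The -o 36964 run behaves as an odd-size probe should: an I/O size must be a whole number of logical blocks, and 36964 = 72 x 512 + 100 leaves a 100-byte remainder against the 512-byte sectors of both namespaces, so each is dropped from the test and, with nothing left, the tool reports that no valid controllers remain.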
00:25:33.799 00:25:33.799 ==================== 00:25:33.799 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:33.799 TCP transport: 00:25:33.799 polls: 27035 00:25:33.799 idle_polls: 11345 00:25:33.799 sock_completions: 15690 00:25:33.799 nvme_completions: 7889 00:25:33.799 submitted_requests: 11770 00:25:33.799 queued_requests: 1 00:25:33.799 00:25:33.799 ==================== 00:25:33.799 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:33.799 TCP transport: 00:25:33.799 polls: 28137 00:25:33.799 idle_polls: 15958 00:25:33.799 sock_completions: 12179 00:25:33.799 nvme_completions: 7547 00:25:33.799 submitted_requests: 11336 00:25:33.799 queued_requests: 1 00:25:33.799 ======================================================== 00:25:33.799 Latency(us) 00:25:33.799 Device Information : IOPS MiB/s Average min max 00:25:33.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1971.61 492.90 65820.21 33734.52 104241.14 00:25:33.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1886.13 471.53 68351.82 28673.48 109882.88 00:25:33.799 ======================================================== 00:25:33.799 Total : 3857.74 964.44 67057.97 28673.48 109882.88 00:25:33.799 00:25:33.799 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:33.799 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.060 rmmod nvme_tcp 00:25:34.060 rmmod nvme_fabrics 00:25:34.060 rmmod nvme_keyring 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1381728 ']' 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1381728 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1381728 ']' 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1381728 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1381728 00:25:34.060 16:20:09 
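The --transport-stat counters above are internally consistent: for NSID 1, sock_completions (15690) equals polls minus idle_polls (27035 - 11345), meaning every non-idle poll drained the socket exactly once, while roughly 42% of polls (11345/27035) found nothing to do; NSID 2 satisfies the same identity (28137 - 15958 = 12179) with a larger idle share, consistent with its lower completion count in the latency table above.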
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1381728' 00:25:34.060 killing process with pid 1381728 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1381728 00:25:34.060 16:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1381728 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.977 16:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.524 16:20:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.524 00:25:38.524 real 0m24.539s 00:25:38.524 user 0m59.435s 00:25:38.524 sys 0m8.619s 00:25:38.524 16:20:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.524 16:20:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:38.524 ************************************ 00:25:38.524 END TEST nvmf_perf 00:25:38.524 ************************************ 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.524 ************************************ 00:25:38.524 START TEST nvmf_fio_host 00:25:38.524 ************************************ 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:38.524 * Looking for test storage... 
00:25:38.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.524 --rc genhtml_branch_coverage=1 00:25:38.524 --rc genhtml_function_coverage=1 00:25:38.524 --rc genhtml_legend=1 00:25:38.524 --rc geninfo_all_blocks=1 00:25:38.524 --rc geninfo_unexecuted_blocks=1 00:25:38.524 00:25:38.524 ' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.524 --rc genhtml_branch_coverage=1 00:25:38.524 --rc genhtml_function_coverage=1 00:25:38.524 --rc genhtml_legend=1 00:25:38.524 --rc geninfo_all_blocks=1 00:25:38.524 --rc geninfo_unexecuted_blocks=1 00:25:38.524 00:25:38.524 ' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.524 --rc genhtml_branch_coverage=1 00:25:38.524 --rc genhtml_function_coverage=1 00:25:38.524 --rc genhtml_legend=1 00:25:38.524 --rc geninfo_all_blocks=1 00:25:38.524 --rc geninfo_unexecuted_blocks=1 00:25:38.524 00:25:38.524 ' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.524 --rc genhtml_branch_coverage=1 00:25:38.524 --rc genhtml_function_coverage=1 00:25:38.524 --rc genhtml_legend=1 00:25:38.524 --rc geninfo_all_blocks=1 00:25:38.524 --rc geninfo_unexecuted_blocks=1 00:25:38.524 00:25:38.524 ' 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.524 16:20:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.524 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:38.525 
16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.525 16:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:46.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:46.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.770 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:46.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:46.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:46.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:25:46.771 00:25:46.771 --- 10.0.0.2 ping statistics --- 00:25:46.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.771 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:25:46.771 00:25:46.771 --- 10.0.0.1 ping statistics --- 00:25:46.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.771 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1388799 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.771 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1388799 00:25:46.772 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1388799 ']' 00:25:46.772 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.772 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.772 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.772 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.772 16:20:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.772 [2024-11-20 16:20:21.883528] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:25:46.772 [2024-11-20 16:20:21.883594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.772 [2024-11-20 16:20:21.983185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.772 [2024-11-20 16:20:22.035981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.772 [2024-11-20 16:20:22.036033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.772 [2024-11-20 16:20:22.036042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.772 [2024-11-20 16:20:22.036049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.772 [2024-11-20 16:20:22.036055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.772 [2024-11-20 16:20:22.038084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.772 [2024-11-20 16:20:22.038247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.772 [2024-11-20 16:20:22.038298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.772 [2024-11-20 16:20:22.038299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.034 16:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.034 16:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:47.034 16:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:47.034 [2024-11-20 16:20:22.871380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.034 16:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:47.034 16:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.034 16:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.034 16:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:47.296 Malloc1 00:25:47.296 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.557 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:47.818 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.818 [2024-11-20 16:20:23.726673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:48.080 16:20:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:48.348 16:20:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:48.348 16:20:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:48.348 16:20:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:48.348 16:20:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:48.609 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:48.609 fio-3.35 00:25:48.609 Starting 1 thread 00:25:51.171 00:25:51.171 test: (groupid=0, jobs=1): 
err= 0: pid=1389461: Wed Nov 20 16:20:26 2024 00:25:51.171 read: IOPS=9525, BW=37.2MiB/s (39.0MB/s)(74.6MiB/2006msec) 00:25:51.171 slat (usec): min=2, max=284, avg= 2.16, stdev= 2.62 00:25:51.171 clat (usec): min=3105, max=13173, avg=7397.81, stdev=539.52 00:25:51.171 lat (usec): min=3140, max=13175, avg=7399.97, stdev=539.31 00:25:51.171 clat percentiles (usec): 00:25:51.171 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:25:51.171 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7504], 00:25:51.171 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:25:51.171 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10945], 99.95th=[12387], 00:25:51.171 | 99.99th=[13173] 00:25:51.171 bw ( KiB/s): min=36184, max=39016, per=99.94%, avg=38080.00, stdev=1283.21, samples=4 00:25:51.171 iops : min= 9046, max= 9754, avg=9520.00, stdev=320.80, samples=4 00:25:51.171 write: IOPS=9536, BW=37.3MiB/s (39.1MB/s)(74.7MiB/2006msec); 0 zone resets 00:25:51.171 slat (usec): min=2, max=211, avg= 2.24, stdev= 1.70 00:25:51.171 clat (usec): min=2429, max=11598, avg=5936.47, stdev=456.56 00:25:51.171 lat (usec): min=2446, max=11600, avg=5938.71, stdev=456.43 00:25:51.171 clat percentiles (usec): 00:25:51.171 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:25:51.171 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:25:51.171 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:25:51.171 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 9634], 99.95th=[10814], 00:25:51.171 | 99.99th=[11600] 00:25:51.171 bw ( KiB/s): min=37168, max=38712, per=100.00%, avg=38146.00, stdev=717.99, samples=4 00:25:51.171 iops : min= 9292, max= 9678, avg=9536.50, stdev=179.50, samples=4 00:25:51.171 lat (msec) : 4=0.12%, 10=99.75%, 20=0.13% 00:25:51.171 cpu : usr=71.47%, sys=27.43%, ctx=34, majf=0, minf=17 00:25:51.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:51.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:51.171 issued rwts: total=19109,19131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.171 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:51.171 00:25:51.171 Run status group 0 (all jobs): 00:25:51.171 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.6MiB (78.3MB), run=2006-2006msec 00:25:51.171 WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.7MiB (78.4MB), run=2006-2006msec 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 
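Before the I/O runs, host/fio.sh provisions the target over JSON-RPC (transport, malloc backing bdev, subsystem, namespace, listener), and both fio passes then bypass the kernel NVMe initiator by LD_PRELOADing SPDK's external fio ioengine, with the NVMe-oF connection encoded in fio's --filename string. Condensed from the commands traced above (paths as laid out by this job's build):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # run fio with SPDK's plugin; --filename carries the connection parameters
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
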
00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:51.171 16:20:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:51.433 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:51.433 fio-3.35 00:25:51.433 Starting 1 thread 00:25:53.977 00:25:53.977 test: (groupid=0, jobs=1): err= 0: pid=1390155: Wed Nov 20 16:20:29 2024 00:25:53.978 read: IOPS=9555, BW=149MiB/s (157MB/s)(299MiB/2003msec) 00:25:53.978 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.61 00:25:53.978 clat (usec): min=1560, max=15425, avg=8220.00, stdev=1964.95 00:25:53.978 lat (usec): min=1564, max=15428, avg=8223.60, stdev=1965.09 00:25:53.978 clat percentiles (usec): 00:25:53.978 | 1.00th=[ 4080], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6456], 00:25:53.978 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8848], 00:25:53.978 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 00:25:53.978 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14484], 99.95th=[15008], 00:25:53.978 | 99.99th=[15401] 00:25:53.978 bw ( KiB/s): min=62944, max=86528, per=49.50%, avg=75688.00, stdev=9691.00, samples=4 00:25:53.978 iops : min= 3934, max= 5408, avg=4730.50, stdev=605.69, samples=4 00:25:53.978 write: IOPS=5793, BW=90.5MiB/s (94.9MB/s)(155MiB/1707msec); 0 zone resets 00:25:53.978 slat (usec): min=39, 
max=455, avg=41.07, stdev= 8.84 00:25:53.978 clat (usec): min=2292, max=16207, avg=9004.59, stdev=1473.52 00:25:53.978 lat (usec): min=2332, max=16340, avg=9045.66, stdev=1475.94 00:25:53.978 clat percentiles (usec): 00:25:53.978 | 1.00th=[ 5473], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 7898], 00:25:53.978 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:25:53.978 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:25:53.978 | 99.00th=[12780], 99.50th=[14222], 99.90th=[15926], 99.95th=[16057], 00:25:53.978 | 99.99th=[16188] 00:25:53.978 bw ( KiB/s): min=65248, max=89824, per=84.65%, avg=78464.00, stdev=10120.57, samples=4 00:25:53.978 iops : min= 4078, max= 5614, avg=4904.00, stdev=632.54, samples=4 00:25:53.978 lat (msec) : 2=0.02%, 4=0.75%, 10=78.98%, 20=20.25% 00:25:53.978 cpu : usr=85.22%, sys=13.53%, ctx=12, majf=0, minf=43 00:25:53.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.978 issued rwts: total=19140,9889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.978 00:25:53.978 Run status group 0 (all jobs): 00:25:53.978 READ: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=299MiB (314MB), run=2003-2003msec 00:25:53.978 WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=155MiB (162MB), run=1707-1707msec 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.978 rmmod nvme_tcp 00:25:53.978 rmmod nvme_fabrics 00:25:53.978 rmmod nvme_keyring 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1388799 ']' 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1388799 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1388799 ']' 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # 
kill -0 1388799 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1388799 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1388799' 00:25:53.978 killing process with pid 1388799 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1388799 00:25:53.978 16:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1388799 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.238 16:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:56.781 00:25:56.781 real 0m18.032s 00:25:56.781 user 1m4.598s 00:25:56.781 sys 0m7.904s 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.781 ************************************ 00:25:56.781 END TEST nvmf_fio_host 00:25:56.781 ************************************ 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.781 ************************************ 00:25:56.781 START TEST nvmf_failover 00:25:56.781 ************************************ 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:56.781 * Looking for test storage... 00:25:56.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:56.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.781 --rc genhtml_branch_coverage=1 00:25:56.781 --rc genhtml_function_coverage=1 00:25:56.781 --rc genhtml_legend=1 00:25:56.781 --rc geninfo_all_blocks=1 00:25:56.781 --rc geninfo_unexecuted_blocks=1 00:25:56.781 00:25:56.781 ' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:56.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.781 --rc genhtml_branch_coverage=1 00:25:56.781 --rc genhtml_function_coverage=1 00:25:56.781 --rc genhtml_legend=1 00:25:56.781 --rc geninfo_all_blocks=1 00:25:56.781 --rc geninfo_unexecuted_blocks=1 00:25:56.781 00:25:56.781 ' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:56.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.781 --rc genhtml_branch_coverage=1 00:25:56.781 --rc genhtml_function_coverage=1 00:25:56.781 --rc genhtml_legend=1 00:25:56.781 --rc geninfo_all_blocks=1 00:25:56.781 --rc geninfo_unexecuted_blocks=1 00:25:56.781 00:25:56.781 ' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:56.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.781 --rc genhtml_branch_coverage=1 00:25:56.781 --rc genhtml_function_coverage=1 00:25:56.781 --rc genhtml_legend=1 00:25:56.781 --rc geninfo_all_blocks=1 00:25:56.781 --rc geninfo_unexecuted_blocks=1 00:25:56.781 00:25:56.781 ' 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.781 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
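The lcov gate at the top of this test (the lt 1.15 2 walk traced earlier) compares dotted version strings field by field. A simplified sketch of the same comparison idea — not the exact cmp_versions body from scripts/common.sh:

    lt() {   # usage: lt 1.15 2 -> success when $1 < $2
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
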
00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.782 16:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:04.924 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:04.924 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:04.924 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:04.924 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.924 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:04.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:26:04.925 00:26:04.925 --- 10.0.0.2 ping statistics --- 00:26:04.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.925 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:04.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:26:04.925 00:26:04.925 --- 10.0.0.1 ping statistics --- 00:26:04.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.925 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1394815 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1394815 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1394815 ']' 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.925 16:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:04.925 [2024-11-20 16:20:39.962363] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:26:04.925 [2024-11-20 16:20:39.962432] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.925 [2024-11-20 16:20:40.063374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:04.925 [2024-11-20 16:20:40.116667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
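To recap the setup that just completed: the harness isolates the target port in its own network namespace so that initiator-to-target traffic genuinely crosses the physical link rather than loopback. Condensed from the commands traced above (arguments verbatim from this log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target: 0.471 ms
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator: 0.323 ms

Both pings succeed, so nvmf_tgt is launched wrapped in "$NVMF_TARGET_NS_CMD" (ip netns exec cvl_0_0_ns_spdk), which is why the @508 invocation above carries the netns prefix.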
00:26:04.925 [2024-11-20 16:20:40.116720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.925 [2024-11-20 16:20:40.116729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.925 [2024-11-20 16:20:40.116737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.925 [2024-11-20 16:20:40.116743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:04.925 [2024-11-20 16:20:40.118494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.925 [2024-11-20 16:20:40.118646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.925 [2024-11-20 16:20:40.118646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.925 16:20:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.925 16:20:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:04.925 16:20:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:04.925 16:20:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:04.925 16:20:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:04.925 16:20:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.925 16:20:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:05.186 [2024-11-20 16:20:41.006871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.186 16:20:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:05.447 Malloc0 00:26:05.447 16:20:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:05.708 16:20:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.969 16:20:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.969 [2024-11-20 16:20:41.824357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.969 16:20:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:06.230 [2024-11-20 16:20:42.021003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:06.230 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:06.491 [2024-11-20 16:20:42.221762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1395204 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1395204 /var/tmp/bdevperf.sock 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1395204 ']' 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:06.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.491 16:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:07.434 16:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.434 16:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:07.434 16:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:07.434 NVMe0n1 00:26:07.695 16:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:07.955 00:26:07.955 16:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:07.955 16:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1395524 00:26:07.955 16:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:08.896 16:20:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.157 [2024-11-20 16:20:44.838457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1307ed0 is same with the state(6) to be set 00:26:09.157 [2024-11-20 16:20:44.838498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1307ed0 is same with the state(6) to be set 00:26:09.157 [2024-11-20 16:20:44.838504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1307ed0 is same with the state(6) to be set 00:26:09.157 
[tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* record for tqpair=0x1307ed0 repeated ~30 times in total while the port 4420 listener was torn down; duplicates elided] 00:26:09.157 16:20:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:12.458 16:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:12.458 00:26:12.458 16:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:12.719 [2024-11-20 16:20:48.414121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308cf0 is same with the state(6) to be set
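To make the failover sequence easier to follow: the target exports one malloc namespace behind nqn.2016-06.io.spdk:cnode1 with TCP listeners on 10.0.0.2 ports 4420/4421/4422, and bdevperf attached the same controller over two of those ports with -x failover, so bdev_nvme keeps the extra connection as an alternate path. failover.sh then removes the listener under the active path while the verify workload runs; the bursts of tcp.c:1773 *ERROR* records are the target-side teardown of the removed listener's qpairs and are expected in this test. The cycle, condensed from the RPCs in this log (rpc.py path shortened, arguments otherwise verbatim):

  # Two paths to the same subsystem, failover policy (host/failover.sh@35 and @36).
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # While I/O runs: drop the active path, let it fail over, bring up a fresh one, repeat.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421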
00:26:12.719 [tcp.c:1773 *ERROR* record for tqpair=0x1308cf0 repeated several dozen times while the port 4421 listener was torn down; duplicates elided] 00:26:12.720 16:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:16.023 16:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.023 [2024-11-20 16:20:51.603285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.023 16:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:16.965 16:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:16.965 [2024-11-20 16:20:52.798015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309bf0 is same with the state(6) to be set 00:26:16.965 [same record for tqpair=0x1309bf0 repeated ~15 times; duplicates elided] 00:26:16.965 16:20:52
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1395524 00:26:23.556 { 00:26:23.556 "results": [ 00:26:23.556 { 00:26:23.556 "job": "NVMe0n1", 00:26:23.556 "core_mask": "0x1", 00:26:23.556 "workload": "verify", 00:26:23.556 "status": "finished", 00:26:23.556 "verify_range": { 00:26:23.556 "start": 0, 00:26:23.556 "length": 16384 00:26:23.556 }, 00:26:23.556 "queue_depth": 128, 00:26:23.556 "io_size": 4096, 00:26:23.556 "runtime": 15.008242, 00:26:23.556 "iops": 12412.246550928483, 00:26:23.556 "mibps": 48.48533808956439, 00:26:23.556 "io_failed": 8541, 00:26:23.556 "io_timeout": 0, 00:26:23.556 "avg_latency_us": 9839.47558918083, 00:26:23.556 "min_latency_us": 539.3066666666666, 00:26:23.556 "max_latency_us": 21954.56 00:26:23.556 } 00:26:23.556 ], 00:26:23.556 "core_count": 1 00:26:23.556 } 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1395204 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1395204 ']' 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1395204 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1395204 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1395204' 00:26:23.556 killing process with pid 1395204 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1395204 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1395204 00:26:23.556 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:23.557 [2024-11-20 16:20:42.307785] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:26:23.557 [2024-11-20 16:20:42.307862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395204 ] 00:26:23.557 [2024-11-20 16:20:42.400696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.557 [2024-11-20 16:20:42.453671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.557 Running I/O for 15 seconds... 
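A sanity check on the JSON summary above: 12412.25 IOPS of 4096-byte I/O is 12412.25 * 4096 / 2^20 ≈ 48.49 MiB/s, matching the reported mibps, and over the 15.008 s runtime that is roughly 186,000 completed I/Os. The 8541 io_failed are consistent with requests caught in flight at the moments their path's listener was removed, which is exactly what this test provokes. The try.txt dump that follows is bdevperf's own log: each path drop shows up as a burst of 'ABORTED - SQ DELETION' completions while the target deletes the submission queues, after which I/O resumes on the surviving path. One quick triage check on such a log (illustrative):

  grep -c 'ABORTED - SQ DELETION' try.txt   # size of the abort bursts across the run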
00:26:23.557 11218.00 IOPS, 43.82 MiB/s [2024-11-20T15:20:59.493Z] [2024-11-20 16:20:44.839773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.557 [2024-11-20 16:20:44.839806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.557 [2024-11-20 16:20:44.839825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.557 [2024-11-20 16:20:44.839842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.557 [2024-11-20 16:20:44.839858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfd70 is same with the state(6) to be set 00:26:23.557 [2024-11-20 16:20:44.839921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.557 [2024-11-20 16:20:44.839932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.557 [2024-11-20 16:20:44.839954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.557 [2024-11-20 16:20:44.839971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.557 [2024-11-20 16:20:44.839988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.839998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.557 [2024-11-20 16:20:44.840006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [2024-11-20 16:20:44.840016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.557 [2024-11-20 16:20:44.840023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.557 [~60 further nvme_qpair.c print_command/print_completion pairs omitted: WRITE lba 98552-98968 and READ lba 98240-98288, every one completed ABORTED - SQ DELETION (00/08)] 00:26:23.558 [2024-11-20 16:20:44.841101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:63 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.558 [2024-11-20 16:20:44.841109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 16:20:44.841119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.558 [2024-11-20 16:20:44.841126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 16:20:44.841136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.558 [2024-11-20 16:20:44.841143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 16:20:44.841152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.558 [2024-11-20 16:20:44.841165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 16:20:44.841175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.558 [2024-11-20 16:20:44.841182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.558 [2024-11-20 16:20:44.841192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.558 [2024-11-20 16:20:44.841199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99056 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 
16:20:44.841453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.559 [2024-11-20 16:20:44.841693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.559 [2024-11-20 16:20:44.841858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.559 [2024-11-20 16:20:44.841865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.841882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.841899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.841916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.841933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.841949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.841967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.841984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.841994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 [2024-11-20 16:20:44.842133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.560 [2024-11-20 16:20:44.842141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.560 
00:26:23.560 [2024-11-20 16:20:44.842163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:23.560 [2024-11-20 16:20:44.842170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:23.560 [2024-11-20 16:20:44.842178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99256 len:8 PRP1 0x0 PRP2 0x0
00:26:23.560 [2024-11-20 16:20:44.842186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.560 [2024-11-20 16:20:44.842225] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:23.560 [2024-11-20 16:20:44.842236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:23.560 [2024-11-20 16:20:44.845774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:23.560 [2024-11-20 16:20:44.845798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbfd70 (9): Bad file descriptor
00:26:23.560 [2024-11-20 16:20:44.884995] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:23.560 11189.50 IOPS, 43.71 MiB/s [2024-11-20T15:20:59.496Z] 11148.67 IOPS, 43.55 MiB/s [2024-11-20T15:20:59.496Z] 11498.75 IOPS, 44.92 MiB/s [2024-11-20T15:20:59.496Z]
00:26:23.560 [2024-11-20 16:20:48.415902-417289] nvme_qpair.c: [... repeated NOTICE pairs elided: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion for every queued READ (sqid:1, lba 54760-55208, len:8, SGL TRANSPORT DATA BLOCK) and WRITE (sqid:1, lba 55216-55648, len:8, SGL DATA BLOCK OFFSET), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:23.563 [2024-11-20 16:20:48.417305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:23.563 [2024-11-20 16:20:48.417311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55656 len:8 PRP1 0x0 PRP2 0x0
00:26:23.563 [2024-11-20 16:20:48.417316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.563 [2024-11-20 16:20:48.417324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:23.563 [2024-11-20 16:20:48.417328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:23.563 [2024-11-20 16:20:48.417333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55664 len:8 PRP1 0x0 PRP2 0x0
00:26:23.563 [2024-11-20 16:20:48.417338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.563 [2024-11-20 16:20:48.417345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:23.563 [2024-11-20 16:20:48.417349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:23.563 [2024-11-20 16:20:48.417353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55672 len:8 PRP1 0x0 PRP2 0x0
00:26:23.563 [2024-11-20 16:20:48.417358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.563 [2024-11-20 16:20:48.417364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:23.563 [2024-11-20 16:20:48.417368] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55680 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55688 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55696 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55704 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55712 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55720 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55728 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55736 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.563 [2024-11-20 16:20:48.417527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55744 len:8 PRP1 0x0 PRP2 0x0 00:26:23.563 [2024-11-20 16:20:48.417532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.563 [2024-11-20 16:20:48.417537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.563 [2024-11-20 16:20:48.417541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.564 [2024-11-20 16:20:48.417547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55752 len:8 PRP1 0x0 PRP2 0x0 00:26:23.564 [2024-11-20 16:20:48.417552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.417557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.564 [2024-11-20 16:20:48.417562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.564 [2024-11-20 16:20:48.417566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55760 len:8 PRP1 0x0 PRP2 0x0 00:26:23.564 [2024-11-20 16:20:48.417571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.430531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.564 [2024-11-20 16:20:48.430558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.564 [2024-11-20 16:20:48.430568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55768 len:8 PRP1 0x0 PRP2 0x0 00:26:23.564 [2024-11-20 16:20:48.430577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.430584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.564 [2024-11-20 16:20:48.430589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.564 [2024-11-20 
16:20:48.430595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55776 len:8 PRP1 0x0 PRP2 0x0 00:26:23.564 [2024-11-20 16:20:48.430603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.430645] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:23.564 [2024-11-20 16:20:48.430674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.564 [2024-11-20 16:20:48.430683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.430692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.564 [2024-11-20 16:20:48.430698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.430712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.564 [2024-11-20 16:20:48.430720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.430727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.564 [2024-11-20 16:20:48.430734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.564 [2024-11-20 16:20:48.430741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:23.564 [2024-11-20 16:20:48.430780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbfd70 (9): Bad file descriptor 00:26:23.564 [2024-11-20 16:20:48.434055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:23.564 [2024-11-20 16:20:48.497416] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
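[editor's note] The "(00/08)" in every completion above is NVMe status code type 0x0 (generic) with status code 0x08, ABORTED - SQ DELETION: these commands were not failed by the media, they were flushed because their submission queue was deleted while the qpair was torn down ahead of the failover. A minimal sketch of how a driver-level application could classify such completions, assuming the public SPDK NVMe API; io_ctx, requeue_io(), and finish_io() are hypothetical application hooks:

    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Hypothetical application hooks; a real app would resubmit the I/O on a
     * replacement qpair once the controller reset below has completed. */
    static void requeue_io(void *io_ctx) { (void)io_ctx; printf("requeue after SQ deletion abort\n"); }
    static void finish_io(void *io_ctx, const struct spdk_nvme_cpl *cpl) { (void)io_ctx; (void)cpl; }

    /* Completion callback (spdk_nvme_cmd_cb) that treats ABORTED - SQ
     * DELETION as retryable, matching the (00/08) completions printed above:
     * SCT 0x0 (generic) + SC 0x08 (SQ deletion abort). */
    static void
    io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    requeue_io(io_ctx);
                    return;
            }
            finish_io(io_ctx, cpl);
    }

A callback like this would be passed as the cb_fn argument of, e.g., spdk_nvme_ns_cmd_write(); the bdev_nvme module's internal retry path plays roughly this role in the test run logged here.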
00:26:23.564 11575.00 IOPS, 45.21 MiB/s [2024-11-20T15:20:59.500Z]
11802.67 IOPS, 46.10 MiB/s [2024-11-20T15:20:59.500Z]
11969.00 IOPS, 46.75 MiB/s [2024-11-20T15:20:59.500Z]
12089.25 IOPS, 47.22 MiB/s [2024-11-20T15:20:59.500Z]
12172.00 IOPS, 47.55 MiB/s [2024-11-20T15:20:59.500Z]
00:26:23.564 [2024-11-20 16:20:52.798185-798309] nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): *NOTICE*: 8 READ commands sqid:1 nsid:1 lba:384-440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [condensed]
00:26:23.564 [2024-11-20 16:20:52.798316-798699] nvme_qpair.c: *NOTICE*: 32 WRITE commands sqid:1 nsid:1 lba:768-1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) [32 identical command/completion pairs condensed]
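[editor's note] The throughput samples are consistent with the len:8 commands in the dump: 8 logical blocks of 512 B is 4 KiB per I/O (assuming a 512-byte logical block size), so 11575 IOPS works out to the reported 45.21 MiB/s. A quick check:

    #include <stdio.h>

    int main(void)
    {
            /* len:8 blocks * 512 B = 4 KiB per I/O; 11575 IOPS -> MiB/s. */
            double mib_s = 11575.0 * 8 * 512 / (1024.0 * 1024.0);
            printf("%.2f MiB/s\n", mib_s); /* prints 45.21 */
            return 0;
    }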
00:26:23.565 [2024-11-20 16:20:52.798706-798808] nvme_qpair.c: *NOTICE*: 8 READ commands sqid:1 nsid:1 lba:448-504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) [condensed]
00:26:23.565 [2024-11-20 16:20:52.798814-799145] nvme_qpair.c: *NOTICE*: 28 WRITE commands sqid:1 nsid:1 lba:1024-1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) [condensed]
00:26:23.566 [2024-11-20 16:20:52.799152-799425] nvme_qpair.c: *NOTICE*: 23 READ commands sqid:1 nsid:1 lba:512-688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) [condensed]
00:26:23.566 [2024-11-20 16:20:52.799432-799665] nvme_qpair.c: *NOTICE*: 20 WRITE commands sqid:1 nsid:1 lba:1248-1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) [condensed]
00:26:23.567 [2024-11-20 16:20:52.799672-799762] nvme_qpair.c: *NOTICE*: 8 READ commands sqid:1 nsid:1 lba:696-752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) [condensed]
00:26:23.567 [2024-11-20 16:20:52.799783-799801] nvme_qpair.c (579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request): 1 queued READ command sqid:1 cid:0 nsid:1 lba:760 len:8 PRP1 0x0 PRP2 0x0, completed manually as ABORTED - SQ DELETION (00/08)
00:26:23.567 [2024-11-20 16:20:52.799838] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:23.567 [2024-11-20 16:20:52.799856-799876] nvme_qpair.c (223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion): *NOTICE*: 2 ASYNC EVENT REQUEST (0c) admin commands qid:0 cid:0-1 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) [condensed]
00:26:23.567 [2024-11-20 16:20:52.799884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c)
qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.567 [2024-11-20 16:20:52.799890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.567 [2024-11-20 16:20:52.799898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.567 [2024-11-20 16:20:52.799903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.567 [2024-11-20 16:20:52.799911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:23.567 [2024-11-20 16:20:52.802344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:23.567 [2024-11-20 16:20:52.802365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbfd70 (9): Bad file descriptor 00:26:23.567 [2024-11-20 16:20:52.866177] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:23.567 12179.50 IOPS, 47.58 MiB/s [2024-11-20T15:20:59.503Z] 12265.36 IOPS, 47.91 MiB/s [2024-11-20T15:20:59.503Z] 12312.58 IOPS, 48.10 MiB/s [2024-11-20T15:20:59.503Z] 12343.23 IOPS, 48.22 MiB/s [2024-11-20T15:20:59.503Z] 12379.36 IOPS, 48.36 MiB/s 00:26:23.567 Latency(us) 00:26:23.567 [2024-11-20T15:20:59.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.567 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:23.567 Verification LBA range: start 0x0 length 0x4000 00:26:23.567 NVMe0n1 : 15.01 12412.25 48.49 569.09 0.00 9839.48 539.31 21954.56 00:26:23.567 [2024-11-20T15:20:59.503Z] =================================================================================================================== 00:26:23.567 [2024-11-20T15:20:59.503Z] Total : 12412.25 48.49 569.09 0.00 9839.48 539.31 21954.56 00:26:23.567 Received shutdown signal, test time was about 15.000000 seconds 00:26:23.567 00:26:23.567 Latency(us) 00:26:23.567 [2024-11-20T15:20:59.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.567 [2024-11-20T15:20:59.503Z] =================================================================================================================== 00:26:23.567 [2024-11-20T15:20:59.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1398533 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1398533 /var/tmp/bdevperf.sock 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1398533 ']' 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:23.568 16:20:58 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:23.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.568 16:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:24.138 16:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.138 16:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:24.138 16:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:24.138 [2024-11-20 16:21:00.001878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:24.138 16:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:24.398 [2024-11-20 16:21:00.186373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:24.398 16:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:24.659 NVMe0n1 00:26:24.659 16:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:24.919 00:26:24.919 16:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:25.178 00:26:25.438 16:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:25.438 16:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:25.438 16:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:25.697 16:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:28.993 16:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:28.993 16:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:28.993 16:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1399658 00:26:28.993 16:21:04 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:28.993 16:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1399658 00:26:29.937 { 00:26:29.937 "results": [ 00:26:29.937 { 00:26:29.937 "job": "NVMe0n1", 00:26:29.937 "core_mask": "0x1", 00:26:29.937 "workload": "verify", 00:26:29.937 "status": "finished", 00:26:29.937 "verify_range": { 00:26:29.937 "start": 0, 00:26:29.937 "length": 16384 00:26:29.937 }, 00:26:29.937 "queue_depth": 128, 00:26:29.937 "io_size": 4096, 00:26:29.937 "runtime": 1.006216, 00:26:29.937 "iops": 12786.518997908997, 00:26:29.937 "mibps": 49.94733983558202, 00:26:29.937 "io_failed": 0, 00:26:29.937 "io_timeout": 0, 00:26:29.937 "avg_latency_us": 9974.441287113323, 00:26:29.937 "min_latency_us": 2088.96, 00:26:29.937 "max_latency_us": 11141.12 00:26:29.937 } 00:26:29.937 ], 00:26:29.937 "core_count": 1 00:26:29.937 } 00:26:29.937 16:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:29.937 [2024-11-20 16:20:59.046877] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:26:29.937 [2024-11-20 16:20:59.046934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398533 ] 00:26:29.937 [2024-11-20 16:20:59.129722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.937 [2024-11-20 16:20:59.157850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.937 [2024-11-20 16:21:01.459650] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:29.937 [2024-11-20 16:21:01.459689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.937 [2024-11-20 16:21:01.459698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.937 [2024-11-20 16:21:01.459705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.937 [2024-11-20 16:21:01.459711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.937 [2024-11-20 16:21:01.459717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.937 [2024-11-20 16:21:01.459722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.937 [2024-11-20 16:21:01.459728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.938 [2024-11-20 16:21:01.459733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.938 [2024-11-20 16:21:01.459739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
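For reference, the failover round captured in this trace can be reproduced by hand with the same RPCs the script issues above (a minimal sketch assuming the bdevperf RPC socket, subsystem NQN, and addresses/ports from this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # attach the primary path with failover enabled (mirrors failover.sh@78)
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -x failover
  # register the alternate paths on ports 4421/4422 (failover.sh@79-80)
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -x failover
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn -x failover
  # drop the active path; bdev_nvme then emits the "Start failover from ... to ..."
  # notice seen in the trace and resets onto the next registered trid
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn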
00:26:29.938 [2024-11-20 16:21:01.459759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:29.938 [2024-11-20 16:21:01.459770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aed70 (9): Bad file descriptor 00:26:29.938 [2024-11-20 16:21:01.471596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:29.938 Running I/O for 1 seconds... 00:26:29.938 12738.00 IOPS, 49.76 MiB/s 00:26:29.938 Latency(us) 00:26:29.938 [2024-11-20T15:21:05.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.938 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:29.938 Verification LBA range: start 0x0 length 0x4000 00:26:29.938 NVMe0n1 : 1.01 12786.52 49.95 0.00 0.00 9974.44 2088.96 11141.12 00:26:29.938 [2024-11-20T15:21:05.874Z] =================================================================================================================== 00:26:29.938 [2024-11-20T15:21:05.874Z] Total : 12786.52 49.95 0.00 0.00 9974.44 2088.96 11141.12 00:26:29.938 16:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.938 16:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:30.199 16:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:30.460 16:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:30.460 16:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:30.460 16:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:30.720 16:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1398533 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1398533 ']' 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1398533 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398533 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398533' 00:26:34.020 killing process with pid 1398533 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1398533 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1398533 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:34.020 16:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:34.281 rmmod nvme_tcp 00:26:34.281 rmmod nvme_fabrics 00:26:34.281 rmmod nvme_keyring 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1394815 ']' 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1394815 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1394815 ']' 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1394815 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.281 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394815 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1394815' 00:26:34.542 killing process with pid 1394815 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1394815 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1394815 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.542 16:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.091 00:26:37.091 real 0m40.244s 00:26:37.091 user 2m3.660s 00:26:37.091 sys 0m8.744s 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:37.091 ************************************ 00:26:37.091 END TEST nvmf_failover 00:26:37.091 ************************************ 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.091 ************************************ 00:26:37.091 START TEST nvmf_host_discovery 00:26:37.091 ************************************ 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:37.091 * Looking for test storage... 
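Before the discovery test starts, note that the pass criterion for the failover test that just finished reduces to counting reset notices in the captured bdevperf trace. A condensed sketch of the check at failover.sh@65-67, assuming try.txt still holds the trace shown above:

  # each completed failover round logs one "Resetting controller successful";
  # the test expects exactly three of them across the run
  count=$(grep -c 'Resetting controller successful' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi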
00:26:37.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.091 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:37.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.092 --rc genhtml_branch_coverage=1 00:26:37.092 --rc genhtml_function_coverage=1 00:26:37.092 --rc genhtml_legend=1 00:26:37.092 --rc geninfo_all_blocks=1 00:26:37.092 --rc geninfo_unexecuted_blocks=1 00:26:37.092 00:26:37.092 ' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:37.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.092 --rc genhtml_branch_coverage=1 00:26:37.092 --rc genhtml_function_coverage=1 00:26:37.092 --rc genhtml_legend=1 00:26:37.092 --rc geninfo_all_blocks=1 00:26:37.092 --rc geninfo_unexecuted_blocks=1 00:26:37.092 00:26:37.092 ' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:37.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.092 --rc genhtml_branch_coverage=1 00:26:37.092 --rc genhtml_function_coverage=1 00:26:37.092 --rc genhtml_legend=1 00:26:37.092 --rc geninfo_all_blocks=1 00:26:37.092 --rc geninfo_unexecuted_blocks=1 00:26:37.092 00:26:37.092 ' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:37.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.092 --rc genhtml_branch_coverage=1 00:26:37.092 --rc genhtml_function_coverage=1 00:26:37.092 --rc genhtml_legend=1 00:26:37.092 --rc geninfo_all_blocks=1 00:26:37.092 --rc geninfo_unexecuted_blocks=1 00:26:37.092 00:26:37.092 ' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:37.092 16:21:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.092 16:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:45.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:45.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:45.239 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.240 16:21:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:45.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:45.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:45.240 
16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.240 16:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:45.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:26:45.240 00:26:45.240 --- 10.0.0.2 ping statistics --- 00:26:45.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.240 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:26:45.240 00:26:45.240 --- 10.0.0.1 ping statistics --- 00:26:45.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.240 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1405453 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1405453 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1405453 ']' 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.240 16:21:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.240 [2024-11-20 16:21:20.348727] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
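The two pings above confirm the point-to-point layout nvmf_tcp_init assembled from the two E810 ports: cvl_0_0 (target, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (initiator, 10.0.0.1) stays in the host namespace. A condensed sketch of that setup, assuming the two ports are cabled to each other as on this rig:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # isolate the target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic (port 4420) in from the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # host -> target netns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> host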
00:26:45.240 [2024-11-20 16:21:20.348798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.240 [2024-11-20 16:21:20.448934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.240 [2024-11-20 16:21:20.500282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.240 [2024-11-20 16:21:20.500326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.240 [2024-11-20 16:21:20.500335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.240 [2024-11-20 16:21:20.500342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.240 [2024-11-20 16:21:20.500348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.240 [2024-11-20 16:21:20.501020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.240 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.240 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:45.240 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:45.240 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:45.240 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.503 [2024-11-20 16:21:21.202761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.503 [2024-11-20 16:21:21.215025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.503 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.504 null0 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.504 null1 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1405589 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1405589 /tmp/host.sock 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1405589 ']' 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:45.504 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.504 16:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.504 [2024-11-20 16:21:21.322024] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
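[editor's note] At this point the target has a TCP transport, a discovery listener on port 8009, and two null bdevs, and a second SPDK app has been started as the "host" side with its own RPC socket. Condensed, the setup is (all commands exactly as issued above; rpc_cmd is the harness wrapper around the SPDK RPC client):

  # Target side (default socket /var/tmp/spdk.sock):
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512    # null bdevs backing the namespaces
  rpc_cmd bdev_null_create null1 1000 512    # (size/block size as used above)
  rpc_cmd bdev_wait_for_examine
  # Host side: a second nvmf_tgt on core 0 with a private RPC socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  waitforlisten "$hostpid" /tmp/host.sock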
00:26:45.504 [2024-11-20 16:21:21.322104] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405589 ] 00:26:45.504 [2024-11-20 16:21:21.414088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.765 [2024-11-20 16:21:21.467146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:46.337 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.338 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:46.599 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.600 [2024-11-20 16:21:22.498334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:46.600 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:46.862 16:21:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:46.862 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:46.863 16:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:47.531 [2024-11-20 16:21:23.212756] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:47.531 [2024-11-20 16:21:23.212788] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:47.532 [2024-11-20 16:21:23.212803] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.532 
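[editor's note] The discovery attach just logged is the result of the target exposing nqn.2016-06.io.spdk:cnode0 on port 4420 while the host's discovery service watches the 8009 listener. A sketch of the two sides, condensed from the RPCs above (the run above issues bdev_nvme_start_discovery before creating the subsystem; order is condensed here, and the poll loop mirrors the waitforcondition helper):

  # Target: export cnode0 with one namespace, allow the test host NQN, listen on 4420.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # Host: run the discovery service against the 8009 discovery listener.
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # Poll until the attached controller appears (get_subsystem_names above).
  while [[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)" != nvme0 ]]; do
      sleep 1
  done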
[2024-11-20 16:21:23.301057] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:47.833 [2024-11-20 16:21:23.523632] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:47.834 [2024-11-20 16:21:23.524709] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ba77a0:1 started. 00:26:47.834 [2024-11-20 16:21:23.526524] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:47.834 [2024-11-20 16:21:23.526542] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:47.834 [2024-11-20 16:21:23.530426] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ba77a0 was disconnected and freed. delete nvme_qpair. 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.834 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.121 16:21:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.121 16:21:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.121 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:48.122 [2024-11-20 16:21:23.938173] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b76120:1 started. 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:48.122 [2024-11-20 16:21:23.941505] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b76120 was disconnected and freed. delete nvme_qpair. 
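[editor's note] Each namespace the host attaches raises a bdev notification, and the get_notification_count helper exercised above counts entries newer than the last consumed notify_id. Its core, as run against the host socket here:

  # Count notifications after the last consumed id, then advance the cursor.
  notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
  notify_id=$((notify_id + notification_count))
  (( notification_count == expected_count ))   # e.g. 1 after adding null1 to cnode0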
00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.122 16:21:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.122 [2024-11-20 16:21:24.042637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:48.122 [2024-11-20 16:21:24.043313] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:48.122 [2024-11-20 16:21:24.043334] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.122 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.384 16:21:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.384 [2024-11-20 16:21:24.129593] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:48.384 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:48.385 16:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:48.385 [2024-11-20 16:21:24.236519] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:48.385 [2024-11-20 16:21:24.236556] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:48.385 [2024-11-20 16:21:24.236564] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:48.385 [2024-11-20 16:21:24.236569] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:49.328 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.589 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:49.589 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:49.589 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.590 [2024-11-20 16:21:25.318048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.590 [2024-11-20 16:21:25.318074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.590 [2024-11-20 16:21:25.318085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.590 [2024-11-20 16:21:25.318093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.590 [2024-11-20 16:21:25.318101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.590 [2024-11-20 16:21:25.318109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.590 [2024-11-20 16:21:25.318117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.590 [2024-11-20 16:21:25.318124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.590 [2024-11-20 16:21:25.318132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.590 [2024-11-20 16:21:25.318562] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:49.590 [2024-11-20 16:21:25.318578] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:49.590 [2024-11-20 16:21:25.328050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.590 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:49.590 [2024-11-20 16:21:25.338084] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.590 [2024-11-20 16:21:25.338099] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.590 [2024-11-20 16:21:25.338104] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.590 [2024-11-20 16:21:25.338109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.590 [2024-11-20 16:21:25.338127] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.590 [2024-11-20 16:21:25.338406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.590 [2024-11-20 16:21:25.338422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.590 [2024-11-20 16:21:25.338431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.590 [2024-11-20 16:21:25.338443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.591 [2024-11-20 16:21:25.338454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.591 [2024-11-20 16:21:25.338461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.591 [2024-11-20 16:21:25.338469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.591 [2024-11-20 16:21:25.338476] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:49.591 [2024-11-20 16:21:25.338481] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.591 [2024-11-20 16:21:25.338486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.591 [2024-11-20 16:21:25.348162] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.591 [2024-11-20 16:21:25.348174] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.591 [2024-11-20 16:21:25.348179] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.591 [2024-11-20 16:21:25.348183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.591 [2024-11-20 16:21:25.348198] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.591 [2024-11-20 16:21:25.348518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.591 [2024-11-20 16:21:25.348532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.591 [2024-11-20 16:21:25.348540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.591 [2024-11-20 16:21:25.348551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.591 [2024-11-20 16:21:25.348562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.591 [2024-11-20 16:21:25.348569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.591 [2024-11-20 16:21:25.348577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.591 [2024-11-20 16:21:25.348582] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.591 [2024-11-20 16:21:25.348587] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.591 [2024-11-20 16:21:25.348592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.591 [2024-11-20 16:21:25.358226] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.591 [2024-11-20 16:21:25.358234] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.591 [2024-11-20 16:21:25.358237] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.591 [2024-11-20 16:21:25.358242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.591 [2024-11-20 16:21:25.358254] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:49.591 [2024-11-20 16:21:25.358546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.591 [2024-11-20 16:21:25.358555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.591 [2024-11-20 16:21:25.358560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.591 [2024-11-20 16:21:25.358568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.591 [2024-11-20 16:21:25.358576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.591 [2024-11-20 16:21:25.358580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.591 [2024-11-20 16:21:25.358585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.591 [2024-11-20 16:21:25.358590] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.591 [2024-11-20 16:21:25.358593] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.591 [2024-11-20 16:21:25.358596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.591 [2024-11-20 16:21:25.368283] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.591 [2024-11-20 16:21:25.368292] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.591 [2024-11-20 16:21:25.368296] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.591 [2024-11-20 16:21:25.368299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.591 [2024-11-20 16:21:25.368309] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.591 [2024-11-20 16:21:25.368596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.591 [2024-11-20 16:21:25.368606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.591 [2024-11-20 16:21:25.368611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.591 [2024-11-20 16:21:25.368619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.591 [2024-11-20 16:21:25.368627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.591 [2024-11-20 16:21:25.368631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.591 [2024-11-20 16:21:25.368637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.591 [2024-11-20 16:21:25.368642] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:49.591 [2024-11-20 16:21:25.368645] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.591 [2024-11-20 16:21:25.368648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.591 [2024-11-20 16:21:25.378338] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.591 [2024-11-20 16:21:25.378348] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.591 [2024-11-20 16:21:25.378351] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.591 [2024-11-20 16:21:25.378354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.591 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:49.591 [2024-11-20 16:21:25.378365] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.591 [2024-11-20 16:21:25.378657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.591 [2024-11-20 16:21:25.378667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.591 [2024-11-20 16:21:25.378672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.591 [2024-11-20 16:21:25.378680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.591 [2024-11-20 16:21:25.378688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.591 [2024-11-20 16:21:25.378692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.591 [2024-11-20 16:21:25.378697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.591 [2024-11-20 16:21:25.378702] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.592 [2024-11-20 16:21:25.378706] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.592 [2024-11-20 16:21:25.378709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:49.592 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:49.592 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.592 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.592 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.592 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.592 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.592 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.592 [2024-11-20 16:21:25.388394] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.592 [2024-11-20 16:21:25.388404] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.592 [2024-11-20 16:21:25.388407] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.592 [2024-11-20 16:21:25.388410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.592 [2024-11-20 16:21:25.388421] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.592 [2024-11-20 16:21:25.388750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.592 [2024-11-20 16:21:25.388761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.592 [2024-11-20 16:21:25.388768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.592 [2024-11-20 16:21:25.388776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.592 [2024-11-20 16:21:25.388789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.592 [2024-11-20 16:21:25.388794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.592 [2024-11-20 16:21:25.388800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.592 [2024-11-20 16:21:25.388804] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.592 [2024-11-20 16:21:25.388808] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.592 [2024-11-20 16:21:25.388810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.592 [2024-11-20 16:21:25.398451] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.592 [2024-11-20 16:21:25.398459] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.592 [2024-11-20 16:21:25.398462] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
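get_bdev_list (host/discovery.sh@55) flattens the host app's bdev inventory into one sorted, space-separated line. A sketch assembled from the traced pipeline; only the function wrapper itself is inferred:

    get_bdev_list() {
        # Query the host-side SPDK app over its RPC socket and
        # normalize the bdev names onto a single line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

With both namespaces attached this prints nvme0n1 nvme0n2, which is exactly what the [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] check a few lines down matches.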
00:26:49.592 [2024-11-20 16:21:25.398465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.592 [2024-11-20 16:21:25.398476] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.592 [2024-11-20 16:21:25.398803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.592 [2024-11-20 16:21:25.398812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.592 [2024-11-20 16:21:25.398817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.592 [2024-11-20 16:21:25.398826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.592 [2024-11-20 16:21:25.398838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.592 [2024-11-20 16:21:25.398843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.592 [2024-11-20 16:21:25.398849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.592 [2024-11-20 16:21:25.398853] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.592 [2024-11-20 16:21:25.398856] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.592 [2024-11-20 16:21:25.398859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.592 [2024-11-20 16:21:25.408505] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.592 [2024-11-20 16:21:25.408514] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.592 [2024-11-20 16:21:25.408517] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.592 [2024-11-20 16:21:25.408520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.592 [2024-11-20 16:21:25.408534] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:49.592 [2024-11-20 16:21:25.408858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.592 [2024-11-20 16:21:25.408868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.592 [2024-11-20 16:21:25.408874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.592 [2024-11-20 16:21:25.408882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.592 [2024-11-20 16:21:25.408894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.592 [2024-11-20 16:21:25.408900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.592 [2024-11-20 16:21:25.408905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.592 [2024-11-20 16:21:25.408909] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.592 [2024-11-20 16:21:25.408913] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.592 [2024-11-20 16:21:25.408916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.592 [2024-11-20 16:21:25.418562] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.592 [2024-11-20 16:21:25.418570] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.593 [2024-11-20 16:21:25.418573] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.593 [2024-11-20 16:21:25.418576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.593 [2024-11-20 16:21:25.418586] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.593 [2024-11-20 16:21:25.418898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.593 [2024-11-20 16:21:25.418907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.593 [2024-11-20 16:21:25.418912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.593 [2024-11-20 16:21:25.418920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.593 [2024-11-20 16:21:25.418932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.593 [2024-11-20 16:21:25.418936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.593 [2024-11-20 16:21:25.418941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.593 [2024-11-20 16:21:25.418945] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:49.593 [2024-11-20 16:21:25.418949] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.593 [2024-11-20 16:21:25.418951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.593 [2024-11-20 16:21:25.428615] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.593 [2024-11-20 16:21:25.428624] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.593 [2024-11-20 16:21:25.428627] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.593 [2024-11-20 16:21:25.428635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.593 [2024-11-20 16:21:25.428645] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.593 [2024-11-20 16:21:25.428927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.593 [2024-11-20 16:21:25.428936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.593 [2024-11-20 16:21:25.428941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.593 [2024-11-20 16:21:25.428949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.593 [2024-11-20 16:21:25.428956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.593 [2024-11-20 16:21:25.428961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.593 [2024-11-20 16:21:25.428966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.593 [2024-11-20 16:21:25.428970] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.593 [2024-11-20 16:21:25.428974] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.593 [2024-11-20 16:21:25.428977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
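Each block above is one pass of the same retry cycle, roughly 10 ms apart: errno 111 is ECONNREFUSED, so nothing is listening on 10.0.0.2:4420 any more (the discovery log page below reports the 4420 subsystem not found and only 4421 found again), and bdev_nvme keeps reconnect-polling until the test observes the surviving path. To decode such errno values on a test box, assuming the moreutils errno tool is installed:

    errno 111    # prints: ECONNREFUSED 111 Connection refused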
00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:49.593 [2024-11-20 16:21:25.438675] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.593 [2024-11-20 16:21:25.438683] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.593 [2024-11-20 16:21:25.438686] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.593 [2024-11-20 16:21:25.438689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.593 [2024-11-20 16:21:25.438700] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
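get_subsystem_paths (host/discovery.sh@63) reports the portal service IDs currently connected for a controller. Reconstructed from the traced pipeline; treating the controller name as $1 is an assumption:

    get_subsystem_paths() {
        # List the trsvcid of every path behind the named controller,
        # numerically sorted onto one line.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The first poll still sees both portals (4420 4421) and fails the comparison against $NVMF_SECOND_PORT; once the discovery poller drops the dead 4420 path, a later poll returns just 4421 and the wait succeeds.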
00:26:49.593 [2024-11-20 16:21:25.438890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.593 [2024-11-20 16:21:25.438898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b77e10 with addr=10.0.0.2, port=4420 00:26:49.593 [2024-11-20 16:21:25.438905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b77e10 is same with the state(6) to be set 00:26:49.593 [2024-11-20 16:21:25.438913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b77e10 (9): Bad file descriptor 00:26:49.593 [2024-11-20 16:21:25.438920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.593 [2024-11-20 16:21:25.438925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.593 [2024-11-20 16:21:25.438930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.593 [2024-11-20 16:21:25.438935] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.593 [2024-11-20 16:21:25.438939] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.593 [2024-11-20 16:21:25.438942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.593 [2024-11-20 16:21:25.446020] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:49.593 [2024-11-20 16:21:25.446033] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:49.593 16:21:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# [[ 4421 == \4\4\2\1 ]] 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.975 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.976 
16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.976 16:21:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.917 [2024-11-20 16:21:27.813099] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:51.917 [2024-11-20 16:21:27.813113] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:51.917 [2024-11-20 16:21:27.813121] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.177 [2024-11-20 16:21:27.901386] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:52.177 [2024-11-20 16:21:27.965104] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:52.177 [2024-11-20 16:21:27.965786] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1bb3600:1 started. 
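is_notification_count_eq sets expected_count and then waits on 'get_notification_count && ((notification_count == expected_count))'. Judging by the traced RPC and by notify_id stepping 2 -> 4 after two new events, a plausible reconstruction:

    get_notification_count() {
        # Count notifications newer than the current cursor and advance it;
        # notification_count and notify_id are globals in the trace.
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

The first check (expected_count=0) passes immediately; after bdev_nvme_stop_discovery removes both namespaces, the second check (expected_count=2) passes once the two removal events land, and the @141 restart of discovery re-attaches nvme0 through portal 4421, setting up the duplicate-start checks below.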
00:26:52.177 [2024-11-20 16:21:27.967128] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:52.177 [2024-11-20 16:21:27.967152] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:52.177 [2024-11-20 16:21:27.971976] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1bb3600 was disconnected and freed. delete nvme_qpair. 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.177 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.178 request: 00:26:52.178 { 00:26:52.178 "name": "nvme", 00:26:52.178 "trtype": "tcp", 00:26:52.178 "traddr": "10.0.0.2", 00:26:52.178 "adrfam": "ipv4", 00:26:52.178 "trsvcid": "8009", 00:26:52.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:52.178 "wait_for_attach": true, 00:26:52.178 "method": "bdev_nvme_start_discovery", 00:26:52.178 "req_id": 1 00:26:52.178 } 00:26:52.178 Got JSON-RPC error response 00:26:52.178 response: 00:26:52.178 { 00:26:52.178 "code": -17, 00:26:52.178 "message": "File exists" 00:26:52.178 } 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.178 16:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.178 request: 00:26:52.178 { 00:26:52.178 "name": "nvme_second", 00:26:52.178 "trtype": "tcp", 00:26:52.178 "traddr": "10.0.0.2", 00:26:52.178 "adrfam": "ipv4", 00:26:52.178 "trsvcid": "8009", 00:26:52.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:52.178 "wait_for_attach": true, 00:26:52.178 "method": 
"bdev_nvme_start_discovery", 00:26:52.178 "req_id": 1 00:26:52.178 } 00:26:52.178 Got JSON-RPC error response 00:26:52.178 response: 00:26:52.178 { 00:26:52.178 "code": -17, 00:26:52.178 "message": "File exists" 00:26:52.178 } 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:52.178 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.438 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:52.439 16:21:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.439 16:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.379 [2024-11-20 16:21:29.227889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.379 [2024-11-20 16:21:29.227914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd5180 with addr=10.0.0.2, port=8010 00:26:53.379 [2024-11-20 16:21:29.227925] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:53.379 [2024-11-20 16:21:29.227931] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:53.379 [2024-11-20 16:21:29.227936] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:54.320 [2024-11-20 16:21:30.229994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.320 [2024-11-20 16:21:30.230016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd5180 with addr=10.0.0.2, port=8010 00:26:54.320 [2024-11-20 16:21:30.230026] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:54.320 [2024-11-20 16:21:30.230031] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:54.320 [2024-11-20 16:21:30.230036] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:55.705 [2024-11-20 16:21:31.232237] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:55.705 request: 00:26:55.705 { 00:26:55.705 "name": "nvme_second", 00:26:55.705 "trtype": "tcp", 00:26:55.705 "traddr": "10.0.0.2", 00:26:55.705 "adrfam": "ipv4", 00:26:55.705 "trsvcid": "8010", 00:26:55.705 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:55.705 "wait_for_attach": false, 00:26:55.705 "attach_timeout_ms": 3000, 00:26:55.705 "method": "bdev_nvme_start_discovery", 00:26:55.705 "req_id": 1 00:26:55.705 } 00:26:55.705 Got JSON-RPC error response 00:26:55.705 response: 00:26:55.705 { 00:26:55.705 "code": -110, 00:26:55.705 "message": "Connection timed out" 00:26:55.705 } 00:26:55.705 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:55.705 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:55.705 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:55.705 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:55.705 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:55.705 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:55.705 16:21:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1405589 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.706 rmmod nvme_tcp 00:26:55.706 rmmod nvme_fabrics 00:26:55.706 rmmod nvme_keyring 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1405453 ']' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1405453 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1405453 ']' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1405453 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1405453 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1405453' 00:26:55.706 killing process with pid 1405453 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1405453 
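killprocess (autotest_common.sh@954-@973) is the guarded teardown that stops the nvmf target, pid 1405453 here. A sketch reconstructed from the traced checks; the early-return semantics of the guards are assumptions, since only the happy path runs in this log:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1     # assumed: refuse an empty pid (sh@954)
        kill -0 "$pid" || return 0    # assumed: nothing to do if already gone (sh@958)
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 here (sh@960)
            [ "$process_name" = sudo ] && return 1            # assumed: never kill a bare sudo (sh@964)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                   # reap it (sh@978)
    }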
00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1405453 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.706 16:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.249 00:26:58.249 real 0m21.118s 00:26:58.249 user 0m25.163s 00:26:58.249 sys 0m7.297s 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.249 ************************************ 00:26:58.249 END TEST nvmf_host_discovery 00:26:58.249 ************************************ 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.249 ************************************ 00:26:58.249 START TEST nvmf_host_multipath_status 00:26:58.249 ************************************ 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:58.249 * Looking for test storage... 
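The TCP side of nvmftestfini above unloads the nvme modules, prunes only the SPDK-tagged firewall rules, tears down the test netns, and flushes the test interface. The iptr helper traced at nvmf/common.sh@791 is short enough to reconstruct directly:

    iptr() {
        # Re-install the current ruleset minus every SPDK_NVMF-tagged
        # rule, leaving unrelated iptables state untouched.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }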
00:26:58.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.249 --rc genhtml_branch_coverage=1 00:26:58.249 --rc genhtml_function_coverage=1 00:26:58.249 --rc genhtml_legend=1 00:26:58.249 --rc geninfo_all_blocks=1 00:26:58.249 --rc geninfo_unexecuted_blocks=1 00:26:58.249 00:26:58.249 ' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.249 --rc genhtml_branch_coverage=1 00:26:58.249 --rc genhtml_function_coverage=1 00:26:58.249 --rc genhtml_legend=1 00:26:58.249 --rc geninfo_all_blocks=1 00:26:58.249 --rc geninfo_unexecuted_blocks=1 00:26:58.249 00:26:58.249 ' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.249 --rc genhtml_branch_coverage=1 00:26:58.249 --rc genhtml_function_coverage=1 00:26:58.249 --rc genhtml_legend=1 00:26:58.249 --rc geninfo_all_blocks=1 00:26:58.249 --rc geninfo_unexecuted_blocks=1 00:26:58.249 00:26:58.249 ' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.249 --rc genhtml_branch_coverage=1 00:26:58.249 --rc genhtml_function_coverage=1 00:26:58.249 --rc genhtml_legend=1 00:26:58.249 --rc geninfo_all_blocks=1 00:26:58.249 --rc geninfo_unexecuted_blocks=1 00:26:58.249 00:26:58.249 ' 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
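The trace above, just after the START TEST banner, walks scripts/common.sh's lcov version gate: 'lt 1.15 2' asks whether the installed lcov 1.15 predates version 2, via cmp_versions 1.15 '<' 2. A condensed reconstruction of the traced helpers; the real cmp_versions accumulates lt/gt/eq flags through the case "$op" visible above instead of returning early, and the non-numeric fallback in decimal is an assumption:

    lt() { cmp_versions "$1" '<' "$2"; }   # scripts/common.sh@373

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && { echo "$d"; return; }   # plain integer (sh@354-@355)
        echo 0                                           # assumed fallback for non-numeric parts
    }

    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"     # split on . - : (sh@336)
        IFS=.-: read -ra ver2 <<< "$3"     # sh@337
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local a=$(decimal "${ver1[v]}") b=$(decimal "${ver2[v]}")
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]                   # assumed: all components equal
    }

Here the first components already differ (1 < 2), so the comparison resolves immediately and lt returns 0, which is why the --rc lcov_branch_coverage options get exported into LCOV_OPTS below.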
00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.249 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.250 16:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.392 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.392 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.392 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.393 16:21:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:06.393 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
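For reference: the gather_supported_nvmf_pci_devs arrays traced above match NICs purely by PCI vendor:device ID (Intel 0x8086 with E810-family devices 0x1592/0x159b, X722 0x37d2, and the Mellanox 0x15b3 list). A minimal stand-alone sketch of the same matching using only stock lspci — an illustration, not part of the test suite; the IDs are taken from the trace:

    # List E810-family ports by the same vendor:device pairs the trace checks for.
    # -D prints the full domain:bus:dev.func address, -nn appends [vendor:device].
    for id in 8086:1592 8086:159b; do
        lspci -Dnn -d "$id"
    done
    # On this node the loop would report the two ports the trace finds:
    # 0000:4b:00.0 and 0000:4b:00.1 (0x8086 - 0x159b).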
00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:06.393 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:06.393 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:27:06.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.393 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.394 16:21:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:27:06.394 00:27:06.394 --- 10.0.0.2 ping statistics --- 00:27:06.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.394 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:27:06.394 00:27:06.394 --- 10.0.0.1 ping statistics --- 00:27:06.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.394 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1412001 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1412001 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1412001 ']' 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.394 16:21:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.394 16:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.394 [2024-11-20 16:21:41.483409] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:27:06.394 [2024-11-20 16:21:41.483478] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.394 [2024-11-20 16:21:41.585030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.394 [2024-11-20 16:21:41.636624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.394 [2024-11-20 16:21:41.636676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.394 [2024-11-20 16:21:41.636685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.394 [2024-11-20 16:21:41.636692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.394 [2024-11-20 16:21:41.636699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.394 [2024-11-20 16:21:41.638295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.394 [2024-11-20 16:21:41.638324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.394 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.394 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:06.394 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.394 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.394 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.656 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1412001 00:27:06.656 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:06.656 [2024-11-20 16:21:42.521882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.656 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:06.917 Malloc0 00:27:06.917 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:27:07.179 16:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.440 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.440 [2024-11-20 16:21:43.344376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:07.702 [2024-11-20 16:21:43.536868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1412367 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1412367 /var/tmp/bdevperf.sock 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1412367 ']' 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
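Condensed from the trace above, the target-side bring-up is the following RPC sequence (here rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path; flags are copied verbatim from the log, with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 coming from multipath_status.sh lines 12-13 traced earlier):

    # nvmf_tgt itself runs inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0x3.
    rpc.py nvmf_create_transport -t tcp -o -u 8192        # multipath_status.sh line 36
    rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on one ANA-reporting subsystem (-r) give the initiator two I/O paths to the same namespace, which is what the multipath status checks that follow exercise.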
00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.702 16:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:08.645 16:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.645 16:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:08.645 16:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:08.906 16:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:09.166 Nvme0n1 00:27:09.426 16:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:09.686 Nvme0n1 00:27:09.686 16:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:09.686 16:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:11.601 16:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:11.601 16:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:11.861 16:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:12.122 16:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:13.063 16:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:13.063 16:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:13.063 16:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.063 16:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.324 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:13.585 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.585 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:13.585 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.585 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:13.846 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.846 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:13.846 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.846 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:14.107 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.107 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:14.107 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.107 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:14.107 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.107 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:14.107 16:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
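The two bdev_nvme_attach_controller calls above (ports 4420 and 4421, both with -x multipath) hang both paths off a single Nvme0n1 bdev inside the bdevperf process; set_ANA_state then drives the per-listener ANA state from the target side. Reconstructed from the traced lines 59-60 of multipath_status.sh (rpc.py path shortened as before), the helper is just:

    # First argument -> ANA state of the 4420 listener, second -> 4421.
    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The -n states exercised in this run are optimized, non_optimized, and inaccessible.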
00:27:14.367 16:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:14.630 16:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.572 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.833 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.833 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.833 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.833 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:16.094 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.094 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:16.094 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.094 16:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:16.353 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.353 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:16.353 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
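Each port_status assertion above is the same one-liner with a different port and field: bdev_nvme_get_io_paths over the bdevperf RPC socket, filtered by jq on the listener's trsvcid. A minimal reproduction follows; the JSON sketch is an abbreviated shape consistent with the fields these filters select, not verbatim output from this run:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
    # Walks a document shaped roughly like:
    # { "poll_groups": [ { "io_paths": [
    #     { "current": true, "connected": true, "accessible": true,
    #       "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" } } ] } ] }

check_status then simply string-compares the printed true/false against the expected value, one field (current, connected, accessible) per port per assertion.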
00:27:16.354 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.354 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.354 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:16.354 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.354 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.613 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.613 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:16.613 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:16.874 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:16.874 16:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.255 16:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:18.255 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.255 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:18.255 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.255 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.515 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.515 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.515 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.515 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.776 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.776 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:18.776 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.776 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:19.037 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.037 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:19.037 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.037 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:19.037 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.037 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:19.037 16:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.298 16:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:19.559 16:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:20.499 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:20.499 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:20.499 16:21:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.499 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.760 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.760 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:20.760 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.760 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.760 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.760 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.760 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.761 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:21.023 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.023 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:21.023 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.023 16:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.284 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.284 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:21.284 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.284 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.544 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.544 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:21.544 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.544 16:21:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.544 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.544 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:21.544 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:21.804 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:22.065 16:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:23.008 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:23.008 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:23.008 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.008 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:23.268 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.268 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:23.268 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.268 16:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.268 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.268 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.268 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.268 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.528 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.528 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.528 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.528 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.789 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:24.051 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:24.051 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:24.051 16:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:24.312 16:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:24.312 16:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:25.696 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:25.696 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:25.696 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.696 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.696 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.696 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:25.696 16:22:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.696 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.957 16:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.217 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.217 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:26.217 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.217 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.478 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:26.478 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:26.478 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.478 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.478 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.478 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:26.738 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:26.738 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:26.998 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:26.998 16:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:28.382 16:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:28.382 16:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:28.382 16:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.382 16:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.382 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:28.643 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.643 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:28.643 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.643 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:28.903 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.903 16:22:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:28.903 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.903 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:29.163 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.163 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:29.163 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.163 16:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:29.163 16:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.163 16:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:29.163 16:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:29.423 16:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:29.684 16:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:30.624 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:30.624 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:30.624 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.624 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.884 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:31.145 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.145 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:31.145 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.145 16:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:31.406 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.406 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:31.406 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.406 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:31.666 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.666 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:31.666 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.666 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.666 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.666 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:31.666 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:31.927 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:32.186 16:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
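The checks traced above all follow one pattern: flip the target-side ANA state on both listeners with nvmf_subsystem_listener_set_ana_state, then query bdev_nvme_get_io_paths over the bdevperf RPC socket and compare a single jq-selected field against the expected value. The following is a minimal stand-alone re-creation of that pattern for reference — the helper names, paths, NQN, and address mirror this trace, but treat it as an illustrative sketch, not the test script itself:

#!/usr/bin/env bash
# Illustrative sketch of the pattern exercised in this trace; the rpc.py
# path, socket, NQN, and target address are copied from the log above
# and would differ in another environment.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# port_status PORT ATTR EXPECTED
# Reads one field (current / connected / accessible) of the io_path whose
# listener uses TCP service port PORT, and compares it to EXPECTED.
port_status() {
    local status
    status=$("$rpc" -s "$sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$status" == "$3" ]]
}

# set_ANA_state STATE_4420 STATE_4421
# Sets the ANA state (optimized, non_optimized, inaccessible) of the two
# target listeners; the host then re-evaluates its I/O paths.
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Example mirroring the step above: with the active_active policy set,
# both paths should remain current and accessible.
set_ANA_state non_optimized non_optimized
sleep 1
port_status 4420 current true && port_status 4421 accessible true

The check_status wrapper in the trace simply runs port_status six times, once per port for each of the current, connected, and accessible fields.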
00:27:33.129 16:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:33.129 16:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:33.129 16:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.129 16:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:33.389 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.650 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.650 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:33.650 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.650 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.911 16:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:34.171 16:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.171 16:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:34.171 16:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:34.432 16:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:34.432 16:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:35.483 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:35.483 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:35.483 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:35.483 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.744 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.744 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:35.744 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.744 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.004 16:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:36.264 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.265 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:36.265 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.265 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:36.525 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.525 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:36.525 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.525 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:36.525 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1412367 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1412367 ']' 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1412367 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1412367 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1412367' 00:27:36.808 killing process with pid 1412367 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1412367 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1412367 00:27:36.808 { 00:27:36.808 "results": [ 00:27:36.808 { 00:27:36.808 "job": "Nvme0n1", 
00:27:36.808 "core_mask": "0x4", 00:27:36.808 "workload": "verify", 00:27:36.808 "status": "terminated", 00:27:36.808 "verify_range": { 00:27:36.808 "start": 0, 00:27:36.808 "length": 16384 00:27:36.808 }, 00:27:36.808 "queue_depth": 128, 00:27:36.808 "io_size": 4096, 00:27:36.808 "runtime": 26.912141, 00:27:36.808 "iops": 11750.644439622994, 00:27:36.808 "mibps": 45.90095484227732, 00:27:36.808 "io_failed": 0, 00:27:36.808 "io_timeout": 0, 00:27:36.808 "avg_latency_us": 10874.0844802125, 00:27:36.808 "min_latency_us": 856.7466666666667, 00:27:36.808 "max_latency_us": 3453310.2933333335 00:27:36.808 } 00:27:36.808 ], 00:27:36.808 "core_count": 1 00:27:36.808 } 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1412367 00:27:36.808 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:36.808 [2024-11-20 16:21:43.633080] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:27:36.808 [2024-11-20 16:21:43.633191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412367 ] 00:27:36.808 [2024-11-20 16:21:43.727351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.808 [2024-11-20 16:21:43.777418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.808 Running I/O for 90 seconds... 00:27:36.808 10521.00 IOPS, 41.10 MiB/s [2024-11-20T15:22:12.744Z] 10927.00 IOPS, 42.68 MiB/s [2024-11-20T15:22:12.744Z] 11024.67 IOPS, 43.07 MiB/s [2024-11-20T15:22:12.744Z] 11411.50 IOPS, 44.58 MiB/s [2024-11-20T15:22:12.744Z] 11756.60 IOPS, 45.92 MiB/s [2024-11-20T15:22:12.744Z] 11940.33 IOPS, 46.64 MiB/s [2024-11-20T15:22:12.744Z] 12067.29 IOPS, 47.14 MiB/s [2024-11-20T15:22:12.744Z] 12165.12 IOPS, 47.52 MiB/s [2024-11-20T15:22:12.744Z] 12259.33 IOPS, 47.89 MiB/s [2024-11-20T15:22:12.744Z] 12337.50 IOPS, 48.19 MiB/s [2024-11-20T15:22:12.744Z] 12393.82 IOPS, 48.41 MiB/s [2024-11-20T15:22:12.744Z] [2024-11-20 16:21:57.584081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:36.808 [2024-11-20 16:21:57.584335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:36.808 12430.25 IOPS, 48.56 MiB/s [2024-11-20T15:22:12.744Z] [2024-11-20 16:21:57.584684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:36.808 [2024-11-20 16:21:57.584822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.808 [2024-11-20 16:21:57.584828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.584977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.584983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
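The paired print_command / print_completion notices in this dump are the host-side qpair log: each queued WRITE is being completed by the target with status (03/02), i.e. Status Code Type 3h (Path Related Status), Status Code 02h (Asymmetric Access Inaccessible) — the expected completion while a listener sits in the inaccessible ANA state, after which the multipath layer retries the I/O on the remaining path. A quick, hypothetical one-liner (not part of the test) to tally these completions from the captured log printed above:

# Count ANA-inaccessible completions in the captured bdevperf log;
# the path is the try.txt file cat'ed earlier in this trace.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt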
00:27:36.809 [2024-11-20 16:21:57.584994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585302] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.809 [2024-11-20 16:21:57.585426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:36.809 [2024-11-20 16:21:57.585436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 
16:21:57.585457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.585710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.585715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.586247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.810 [2024-11-20 16:21:57.586258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.586270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.810 [2024-11-20 16:21:57.586275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.586286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.810 [2024-11-20 16:21:57.586291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:36.810 [2024-11-20 16:21:57.586302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:39 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.810 [2024-11-20 16:21:57.586307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
[... 00:27:36.810 - 00:27:36.815, 2024-11-20 16:21:57.586320 - 16:21:57.602254: nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion repeat the same *NOTICE* pair for each outstanding I/O on qid:1 nsid:1 len:8 (cid 0-126) - READ sqid:1 lba:13664-14032 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), WRITE sqid:1 lba:14040-14680 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) - with every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0025 through 007f and wrapping to 0000-0067 ...]
[2024-11-20 16:21:57.602265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.815
[2024-11-20 16:21:57.602286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:36.815 [2024-11-20 16:21:57.602444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.815 [2024-11-20 16:21:57.602451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.602677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.602683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603266] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.603439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:36.816 [2024-11-20 16:21:57.603445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.611195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.611222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.611240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.611250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.611266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.611274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.611290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.611298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.611314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.611323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.611338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.816 [2024-11-20 16:21:57.611346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.816 [2024-11-20 16:21:57.611362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.611606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.611615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612189] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 
sqhd:002d p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.817 [2024-11-20 16:21:57.612710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.817 [2024-11-20 16:21:57.612735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.817 [2024-11-20 16:21:57.612751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.612983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.612999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13984 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.818 [2024-11-20 16:21:57.613326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.818 [2024-11-20 16:21:57.613350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.818 [2024-11-20 16:21:57.613374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.818 [2024-11-20 16:21:57.613398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:36.818 [2024-11-20 16:21:57.613414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
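Every completion in this stretch carries the same status pair, printed by spdk_nvme_print_completion as "(03/02)": in NVMe terms, Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), meaning the namespace's ANA state makes it unreachable on this path, while dnr:0 leaves the command eligible for retry. A minimal decoding sketch follows (the tables carry only the codes relevant here, taken from the NVMe base specification; the function and dictionary names are illustrative, not SPDK API):

    # Decode the "(sct/sc)" pair shown in each completion notice above.
    # Only the status-code types and the Path Related status codes seen in
    # this log are filled in; names are illustrative, not SPDK identifiers.
    SCT_NAMES = {
        0x0: "GENERIC COMMAND",
        0x1: "COMMAND SPECIFIC",
        0x2: "MEDIA AND DATA INTEGRITY",
        0x3: "PATH RELATED",
    }
    PATH_SC_NAMES = {  # status codes valid when sct == 0x3
        0x00: "INTERNAL PATH ERROR",
        0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
        0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
        0x03: "ASYMMETRIC ACCESS TRANSITION",
    }

    def decode_status(sct: int, sc: int, dnr: int) -> str:
        """Render an NVMe completion status the way this log prints it."""
        name = PATH_SC_NAMES.get(sc, f"SC {sc:#04x}") if sct == 0x3 else f"SC {sc:#04x}"
        kind = SCT_NAMES.get(sct, f"SCT {sct:#x}")
        retry = "retryable" if dnr == 0 else "do not retry"
        return f"{name} ({sct:02x}/{sc:02x}) [{kind}], {retry} (dnr:{dnr})"

    print(decode_status(0x3, 0x02, 0))
    # ASYMMETRIC ACCESS INACCESSIBLE (03/02) [PATH RELATED], retryable (dnr:0)

Because dnr is 0, a multipath-aware host is expected to requeue these commands, which is consistent with the second pass over the same LBAs that follows.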
[... condensed: the WRITE pass then repeats over the same LBAs with new cids, command/completion pairs for lba 14064-14456 from 16:21:57.613 to 16:21:57.615, each completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
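With a few hundred near-identical notices, a short filter summarizes the run faster than scrolling. The sketch below is a hypothetical helper, not part of the test suite; its regexes are inferred from the two *NOTICE* formats printed above (a command line with opcode and lba, a completion line with status name and "(sct/sc)"):

    import re
    from collections import Counter

    # Command notices: "... *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14064 len:8 ..."
    CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+)")
    # Completion notices: "... *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ..."
    CPL_RE = re.compile(r"\*NOTICE\*: ([A-Z ]+?) \((\d{2})/(\d{2})\) qid:\d+")

    def summarize(log_text: str) -> str:
        """Count opcodes, the LBA span, and completion statuses in a console log."""
        ops, statuses, lbas = Counter(), Counter(), []
        for m in CMD_RE.finditer(log_text):
            ops[m.group(1)] += 1
            lbas.append(int(m.group(2)))
        for m in CPL_RE.finditer(log_text):
            statuses[f"{m.group(1)} ({m.group(2)}/{m.group(3)})"] += 1
        span = f"lba {min(lbas)}-{max(lbas)}" if lbas else "no I/O"
        return f"{dict(ops)} over {span}; completions: {dict(statuses)}"

Applied to this section it would report only WRITE and READ opcodes, LBAs in the 13664-14680 range, and a single completion status, ASYMMETRIC ACCESS INACCESSIBLE (03/02).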
00:27:36.819 [2024-11-20 16:21:57.615613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:36.819 [2024-11-20 16:21:57.615628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.615977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.615993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.820 [2024-11-20 16:21:57.616848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.616883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.616915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:27:36.820 [2024-11-20 16:21:57.616936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.616947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.616969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.616979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.820 [2024-11-20 16:21:57.617246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:36.820 [2024-11-20 16:21:57.617268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.821 [2024-11-20 16:21:57.617652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:36.821 [2024-11-20 16:21:57.617911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.617976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.617998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:36.821 [2024-11-20 16:21:57.618392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.821 [2024-11-20 16:21:57.618404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.618425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.822 [2024-11-20 16:21:57.618438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.618460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.618471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.618492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.618503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.618524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.618535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.618557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.618568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.618589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.618600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:27:36.822 [2024-11-20 16:21:57.619602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.619976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.619986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620210] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:36.822 [2024-11-20 16:21:57.620357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.822 [2024-11-20 16:21:57.620368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 
16:21:57.620526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:57.620768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:57.620778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14456 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021622] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 
16:21:58.021938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.021971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.021982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.022003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.022013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.022035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.022045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:36.823 [2024-11-20 16:21:58.022066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.823 [2024-11-20 16:21:58.022076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.824 [2024-11-20 16:21:58.022108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.824 [2024-11-20 16:21:58.022617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.824 [2024-11-20 16:21:58.022669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.022979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.022989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023147] nvme_qpair.c: 
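For context when reading these completions: the "(03/02)" pair in each spdk_nvme_print_completion line is the NVMe Status Code Type and Status Code. SCT 0x3 is Path Related Status, and within that set SC 0x02 is Asymmetric Access Inaccessible, i.e. the controller's ANA state makes the namespace unreachable on this path, so every queued READ and WRITE is completed with an error rather than executed; the trailing p/m/dnr fields are the phase tag, More, and Do Not Retry bits (dnr:0 here, so the host may retry, e.g. on another path). A minimal standalone C sketch of decoding such a pair follows; the helper names are illustrative, not SPDK APIs, and only the status values come from the NVMe base specification:

/* sketch: decode the "(SCT/SC)" pair printed above, e.g. "(03/02)" */
#include <stdio.h>

/* Status Code Type (SCT), completion queue entry DW3 bits 27:25. */
static const char *sct_name(unsigned int sct)
{
	switch (sct) {
	case 0x0: return "GENERIC COMMAND STATUS";
	case 0x1: return "COMMAND SPECIFIC STATUS";
	case 0x2: return "MEDIA AND DATA INTEGRITY ERROR";
	case 0x3: return "PATH RELATED STATUS";
	default:  return "VENDOR SPECIFIC OR RESERVED";
	}
}

/* Status Codes (SC) defined under SCT 0x3, Path Related Status. */
static const char *path_sc_name(unsigned int sc)
{
	switch (sc) {
	case 0x00: return "INTERNAL PATH ERROR";
	case 0x01: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
	case 0x02: return "ASYMMETRIC ACCESS INACCESSIBLE";
	case 0x03: return "ASYMMETRIC ACCESS TRANSITION";
	default:   return "OTHER PATH RELATED STATUS";
	}
}

int main(void)
{
	unsigned int sct = 0x03;	/* first number of "(03/02)" in the log */
	unsigned int sc  = 0x02;	/* second number of "(03/02)" */

	if (sct == 0x3)
		printf("sct=%#x (%s), sc=%#x (%s)\n",
		       sct, sct_name(sct), sc, path_sc_name(sc));
	else
		printf("sct=%#x (%s), sc=%#x\n", sct, sct_name(sct), sc);
	return 0;
}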
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:36.824 [2024-11-20 16:21:58.023550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.824 [2024-11-20 16:21:58.023627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.023972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.023982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.024011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.024022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.024050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.024058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:36.824 [2024-11-20 16:21:58.024077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.824 [2024-11-20 16:21:58.024084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:21:58.024399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:21:58.024424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:21:58.024449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:21:58.024475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:27:36.825 [2024-11-20 16:21:58.024494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:21:58.024500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:21:58.024629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:21:58.024639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:36.825 11474.15 IOPS, 44.82 MiB/s [2024-11-20T15:22:12.761Z] 10654.57 IOPS, 41.62 MiB/s [2024-11-20T15:22:12.761Z] 9944.27 IOPS, 38.84 MiB/s [2024-11-20T15:22:12.761Z] 9774.94 IOPS, 38.18 MiB/s [2024-11-20T15:22:12.761Z] 9959.47 IOPS, 38.90 MiB/s [2024-11-20T15:22:12.761Z] 10314.22 IOPS, 40.29 MiB/s [2024-11-20T15:22:12.761Z] 10656.00 IOPS, 41.62 MiB/s [2024-11-20T15:22:12.761Z] 10900.70 IOPS, 42.58 MiB/s [2024-11-20T15:22:12.761Z] 10990.19 IOPS, 42.93 MiB/s [2024-11-20T15:22:12.761Z] 11071.73 IOPS, 43.25 MiB/s [2024-11-20T15:22:12.761Z] 11272.83 IOPS, 44.03 MiB/s [2024-11-20T15:22:12.761Z] 11509.83 IOPS, 44.96 MiB/s [2024-11-20T15:22:12.761Z] [2024-11-20 16:22:10.328280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:22:10.328318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:22:10.328335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.825 [2024-11-20 16:22:10.328341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:22:10.328352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:22:10.328358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:22:10.328368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:22:10.328373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:22:10.329189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:22:10.329197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 16:22:10.329207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.825 [2024-11-20 16:22:10.329213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.825 [2024-11-20 
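Every command in the burst above is len:8 with an SGL length of 0x1000, i.e. 4 KiB per I/O (eight 512-byte blocks), and the throughput samples agree with that size: for each sample, MiB/s = IOPS * 4096 / 2**20. The dip from roughly 11.5k to 9.8k IOPS and back is consistent with one path going INACCESSIBLE and I/O resuming afterwards. A quick standalone check of the arithmetic, with the sample values copied from the log and the 4 KiB size inferred from the len fields rather than stated by the tool:

# iops_check.py - sanity-check the IOPS/MiB/s samples against a 4 KiB I/O size
SAMPLES = [  # (IOPS, MiB/s) pairs exactly as printed above
    (11474.15, 44.82), (10654.57, 41.62), (9944.27, 38.84),
    (9774.94, 38.18), (9959.47, 38.90), (10314.22, 40.29),
    (10656.00, 41.62), (10900.70, 42.58), (10990.19, 42.93),
    (11071.73, 43.25), (11272.83, 44.03), (11509.83, 44.96),
]
IO_SIZE = 8 * 512  # len:8 blocks of 512 B each = 0x1000 bytes per command

for iops, mibps in SAMPLES:
    computed = iops * IO_SIZE / (1 << 20)  # bytes/s converted to MiB/s
    assert abs(computed - mibps) < 0.006, (iops, computed, mibps)
print("all 12 samples consistent with 4 KiB I/Os")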
00:27:36.825 [2024-11-20 16:22:10.328280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.825 [2024-11-20 16:22:10.328318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:36.825 [~140 further command/completion pairs, 16:22:10.328335 through 16:22:10.334189, condensed: mixed READ and WRITE, lba 76520-77608 with a number of lbas reissued, all sqid:1 nsid:1 len:8; every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd 0014 wrapping through 007f to 0021, p:0 m:0 dnr:0]
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.334204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.334220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.334238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.334253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.334269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.334937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.334954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.334971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.334986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.334996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.829 [2024-11-20 16:22:10.335128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:36.829 [2024-11-20 16:22:10.335163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:36.829 [2024-11-20 16:22:10.335488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.829 [2024-11-20 16:22:10.335493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.335714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.335730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.335746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.335762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.335772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.335777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:27:36.830 [2024-11-20 16:22:10.336214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.336268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.336299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.336314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.336330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.830 [2024-11-20 16:22:10.336346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.336388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.336393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:36.830 [2024-11-20 16:22:10.337375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.830 [2024-11-20 16:22:10.337386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.337516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.337531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.337610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.337626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.337641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:36.831 [2024-11-20 16:22:10.337657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.337668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.337673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.347349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.347370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.347382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.347388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.347400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.347405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.347416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.347421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.349584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.349603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.349619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.831 [2024-11-20 16:22:10.349809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.349825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:36.831 [2024-11-20 16:22:10.349835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.831 [2024-11-20 16:22:10.349842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.349857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.349873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.349889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.349905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.349920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.349936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.349952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:27:36.832 [2024-11-20 16:22:10.349962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.349967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.349983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.349993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.349998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.350014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.350077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.350093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.350108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.350123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.350139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.350234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.350240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.351631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.832 [2024-11-20 16:22:10.351645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.832 [2024-11-20 16:22:10.351657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.832 [2024-11-20 16:22:10.351662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:36.832 [2024-11-20 16:22:10.351673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:36.832 [2024-11-20 16:22:10.351678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
[... repeated NOTICE pairs elided (app timestamps 16:22:10.351688 through 16:22:10.362521): every queued READ/WRITE on qid:1 is printed by nvme_io_qpair_print_command and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:36.838 [2024-11-20 16:22:10.362535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1
lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.362542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.362562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.362582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.362603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.362623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.362644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.362664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.362685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.362706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.362728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.362748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.362769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.362783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.362790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:27:36.838 [2024-11-20 16:22:10.364717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.364755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.364771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.364786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.364801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.364817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.364832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.838 [2024-11-20 16:22:10.364848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.838 [2024-11-20 16:22:10.364911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:36.838 [2024-11-20 16:22:10.364921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.364928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.364939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.364944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.364954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.364960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.364970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.364975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.364986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.364991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:36.839 [2024-11-20 16:22:10.365186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.365965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.365993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.365999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.366014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.366030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.366046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.366062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.839 [2024-11-20 16:22:10.366877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.366896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.366912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.839 [2024-11-20 16:22:10.366928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:36.839 [2024-11-20 16:22:10.366939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.366944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.366954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.366960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.366970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.366975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.366988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.366993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:27:36.840 [2024-11-20 16:22:10.367129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.367311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.367322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.367327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.368130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.368147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.368170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.368248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.368295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.840 [2024-11-20 16:22:10.368310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.840 [2024-11-20 16:22:10.368341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:36.840 [2024-11-20 16:22:10.368351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.368357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.368368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.368374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.368384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:36.841 [2024-11-20 16:22:10.368389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.368399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.368404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.368415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.368420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.368975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.368986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.368997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.841 [2024-11-20 16:22:10.369320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.841 [2024-11-20 16:22:10.369687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.841 [2024-11-20 16:22:10.369692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:27:36.841 [2024-11-20 16:22:10.369702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:36.841 [2024-11-20 16:22:10.369708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:36.841 [2024-11-20 16:22:10.369718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:36.841 [2024-11-20 16:22:10.369723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:36.841 [2024-11-20 16:22:10.369734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.841 [2024-11-20 16:22:10.369739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:36.841 [2024-11-20 16:22:10.369749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.841 [2024-11-20 16:22:10.369755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:36.841 [2024-11-20 16:22:10.369765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.841 [2024-11-20 16:22:10.369770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:36.841 [2024-11-20 16:22:10.369780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.841 [2024-11-20 16:22:10.369787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:36.841 [... log condensed for readability: from 16:22:10.369797 through 16:22:10.379277 the same two-line command/completion pattern repeats for roughly 150 further qid:1 READ/WRITE commands (cids 1-126, lbas in the 78008-80128 range), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0 ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.379303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.379309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.379319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.379324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.380133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.380149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.380173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.380188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.380203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.380218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.847 [2024-11-20 16:22:10.380233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:36.847 [2024-11-20 16:22:10.380243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:36.847 [2024-11-20 16:22:10.380249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:36.847 [2024-11-20 16:22:10.380259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.847 [2024-11-20 16:22:10.380264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:36.847 [2024-11-20 16:22:10.380274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.847 [2024-11-20 16:22:10.380279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:36.847 [2024-11-20 16:22:10.380289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.847 [2024-11-20 16:22:10.380294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:36.847 [2024-11-20 16:22:10.380304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.847 [2024-11-20 16:22:10.380310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:36.847 11666.80 IOPS, 45.57 MiB/s [2024-11-20T15:22:12.783Z] 11718.00 IOPS, 45.77 MiB/s [2024-11-20T15:22:12.783Z] Received shutdown signal, test time was about 26.912753 seconds
00:27:36.847
00:27:36.847 Latency(us)
00:27:36.847 [2024-11-20T15:22:12.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:36.847 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:36.847 Verification LBA range: start 0x0 length 0x4000
00:27:36.847 Nvme0n1 : 26.91 11750.64 45.90 0.00 0.00 10874.08 856.75 3453310.29
00:27:36.847 [2024-11-20T15:22:12.783Z] ===================================================================================================================
00:27:36.847 [2024-11-20T15:22:12.783Z] Total : 11750.64 45.90 0.00 0.00 10874.08 856.75 3453310.29
00:27:36.847 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:37.108 16:22:12
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:37.108 rmmod nvme_tcp 00:27:37.108 rmmod nvme_fabrics 00:27:37.108 rmmod nvme_keyring 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1412001 ']' 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1412001 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1412001 ']' 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1412001 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1412001 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1412001' 00:27:37.108 killing process with pid 1412001 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1412001 00:27:37.108 16:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1412001 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.369 16:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.279 16:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.279 00:27:39.279 real 0m41.445s 00:27:39.279 user 1m46.968s 00:27:39.279 sys 0m11.569s 00:27:39.279 16:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.279 16:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:39.279 ************************************ 00:27:39.279 END TEST nvmf_host_multipath_status 00:27:39.279 ************************************ 00:27:39.279 16:22:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:39.279 16:22:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:39.279 16:22:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.279 16:22:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.540 ************************************ 00:27:39.540 START TEST nvmf_discovery_remove_ifc 00:27:39.540 ************************************ 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:39.540 * Looking for test storage... 00:27:39.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- 
# : 1 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:39.540 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:39.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.541 --rc genhtml_branch_coverage=1 00:27:39.541 --rc genhtml_function_coverage=1 00:27:39.541 --rc genhtml_legend=1 00:27:39.541 --rc geninfo_all_blocks=1 00:27:39.541 --rc geninfo_unexecuted_blocks=1 00:27:39.541 00:27:39.541 ' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:39.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.541 --rc genhtml_branch_coverage=1 00:27:39.541 --rc genhtml_function_coverage=1 00:27:39.541 --rc genhtml_legend=1 00:27:39.541 --rc geninfo_all_blocks=1 00:27:39.541 --rc geninfo_unexecuted_blocks=1 00:27:39.541 00:27:39.541 ' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:39.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.541 --rc genhtml_branch_coverage=1 00:27:39.541 --rc genhtml_function_coverage=1 00:27:39.541 --rc genhtml_legend=1 00:27:39.541 --rc geninfo_all_blocks=1 00:27:39.541 --rc geninfo_unexecuted_blocks=1 00:27:39.541 00:27:39.541 ' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:39.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.541 --rc genhtml_branch_coverage=1 00:27:39.541 --rc genhtml_function_coverage=1 00:27:39.541 --rc genhtml_legend=1 00:27:39.541 --rc geninfo_all_blocks=1 00:27:39.541 --rc 
geninfo_unexecuted_blocks=1 00:27:39.541 00:27:39.541 ' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:27:39.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.541 16:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.682 16:22:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.682 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:47.683 Found 
0000:4b:00.0 (0x8086 - 0x159b) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:47.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:47.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:47.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:47.683 16:22:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:47.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:27:47.683 00:27:47.683 --- 10.0.0.2 ping statistics --- 00:27:47.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.683 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:27:47.683 00:27:47.683 --- 10.0.0.1 ping statistics --- 00:27:47.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.683 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1422319 00:27:47.683 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1422319 00:27:47.684 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
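
The nvmf_tcp_init trace above reduces to a short network-namespace recipe: the first e810 port (cvl_0_0) is moved into a private namespace for the target, the second port (cvl_0_1) stays in the root namespace for the initiator, and both get addresses on 10.0.0.0/24 so host and target can talk over real NICs on one machine. A condensed sketch using the commands from the trace (the real script additionally tags the iptables rule with an SPDK_NVMF comment so that nvmftestfini can strip it during cleanup):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
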
00:27:47.684 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1422319 ']' 00:27:47.684 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.684 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.684 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.684 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.684 16:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.684 [2024-11-20 16:22:22.979940] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:27:47.684 [2024-11-20 16:22:22.980005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.684 [2024-11-20 16:22:23.082957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.684 [2024-11-20 16:22:23.133932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.684 [2024-11-20 16:22:23.133987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.684 [2024-11-20 16:22:23.133996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.684 [2024-11-20 16:22:23.134009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.684 [2024-11-20 16:22:23.134015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
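
Above, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app's RPC socket answers. A minimal sketch of that idiom, assuming rpc_get_methods as a cheap readiness probe (the real waitforlisten in autotest_common.sh is more elaborate, with retry limits and socket-file checks):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# poll the RPC socket until the target accepts commands
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
    sleep 0.5
done
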
00:27:47.684 [2024-11-20 16:22:23.134825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.944 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.944 [2024-11-20 16:22:23.864980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.944 [2024-11-20 16:22:23.873289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:48.205 null0 00:27:48.205 [2024-11-20 16:22:23.905194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.205 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.205 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1422608 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1422608 /tmp/host.sock 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1422608 ']' 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:48.206 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.206 16:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:48.206 [2024-11-20 16:22:23.981733] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
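
The second app started above is the host side of the test: it gets its own RPC socket (/tmp/host.sock) and is launched with --wait-for-rpc, which defers subsystem initialization so that options that must precede it can be applied first. That is why the trace that follows sets bdev_nvme options and only then calls framework_start_init. A sketch of the ordering, with flags copied from the trace:

./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

# pre-init configuration while the app is paused, then resume startup
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
./scripts/rpc.py -s /tmp/host.sock framework_start_init
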
00:27:48.206 [2024-11-20 16:22:23.981802] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422608 ] 00:27:48.206 [2024-11-20 16:22:24.077131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.206 [2024-11-20 16:22:24.129587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.148 16:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.089 [2024-11-20 16:22:25.963392] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:50.089 [2024-11-20 16:22:25.963413] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:50.089 [2024-11-20 16:22:25.963430] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:50.350 [2024-11-20 16:22:26.049712] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:50.350 [2024-11-20 16:22:26.265897] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:50.350 [2024-11-20 16:22:26.266868] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bce410:1 started. 
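
With the discovery controller attached and bdev nvme0n1 created, the test now repeatedly compares the host's bdev list against an expected value, removes the target-side interface, and waits for the bdev to disappear once the 2-second ctrlr-loss timeout expires. A sketch of the get_bdev_list/wait_for_bdev helpers traced below (the jq pipeline and ip commands are verbatim from the xtrace; the function structure is paraphrased and omits any timeout handling the real helpers may have):

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the bdev list equals the expected string
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1                                      # path is up
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down     # yank the interface under test
wait_for_bdev ''                                           # ctrlr is lost, bdev list goes empty
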
00:27:50.350 [2024-11-20 16:22:26.268463] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:50.350 [2024-11-20 16:22:26.268507] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:50.350 [2024-11-20 16:22:26.268529] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:50.350 [2024-11-20 16:22:26.268544] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:50.350 [2024-11-20 16:22:26.268565] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.350 [2024-11-20 16:22:26.273004] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bce410 was disconnected and freed. delete nvme_qpair. 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.350 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.611 16:22:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:50.611 16:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:51.994 16:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.936 16:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.877 16:22:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:53.877 16:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:54.819 16:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:56.202 [2024-11-20 16:22:31.709078] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:56.202 [2024-11-20 16:22:31.709117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.202 [2024-11-20 16:22:31.709128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.202 [2024-11-20 16:22:31.709136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.202 [2024-11-20 16:22:31.709141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.202 [2024-11-20 16:22:31.709147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.202 [2024-11-20 16:22:31.709153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.202 [2024-11-20 16:22:31.709162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.202 [2024-11-20 16:22:31.709167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.202 [2024-11-20 16:22:31.709174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.203 [2024-11-20 16:22:31.709179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.203 [2024-11-20 16:22:31.709185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baac00 is same with the state(6) to be set 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.203 [2024-11-20 16:22:31.719098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baac00 (9): Bad file descriptor 00:27:56.203 [2024-11-20 16:22:31.729132] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:56.203 [2024-11-20 16:22:31.729142] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:56.203 [2024-11-20 16:22:31.729146] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:56.203 [2024-11-20 16:22:31.729150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:56.203 [2024-11-20 16:22:31.729171] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
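The polling pattern that repeats through the trace (sh@29 for the list, sh@33 for the comparison, sh@34 for the one-second sleep) can be reconstructed as two small helpers. A sketch under the assumption that this mirrors host/discovery_remove_ifc.sh; the real script may also carry an iteration cap:

  get_bdev_list() {
      # all bdev names known to the host app, normalized to one sorted line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll until the list equals the expected value; '' means fully drained
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

The initial wait_for_bdev nvme0n1 returned immediately because discovery had already attached the namespace; the wait_for_bdev '' call is what produces the repeated sleep-and-poll blocks in this part of the trace.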
00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:56.203 16:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.144 [2024-11-20 16:22:32.795215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:57.144 [2024-11-20 16:22:32.795310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baac00 with addr=10.0.0.2, port=4420 00:27:57.144 [2024-11-20 16:22:32.795341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baac00 is same with the state(6) to be set 00:27:57.144 [2024-11-20 16:22:32.795399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baac00 (9): Bad file descriptor 00:27:57.144 [2024-11-20 16:22:32.796516] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:57.144 [2024-11-20 16:22:32.796587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:57.144 [2024-11-20 16:22:32.796609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:57.144 [2024-11-20 16:22:32.796632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:57.144 [2024-11-20 16:22:32.796653] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:57.144 [2024-11-20 16:22:32.796669] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:57.144 [2024-11-20 16:22:32.796683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:57.144 [2024-11-20 16:22:32.796717] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
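The errno 110 (connection timed out) failures here are the intended behavior: earlier in the test (sh@75/76) the target's address was deleted and its interface downed inside the cvl_0_0_ns_spdk namespace, so every reconnect attempt to 10.0.0.2:4420 must time out until ctrlr-loss-timeout-sec expires. The triggering commands, copied from the trace, with the drain wait that the sleep loop above is servicing:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''   # block until nvme0n1 disappears from bdev_get_bdevs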
00:27:57.144 [2024-11-20 16:22:32.796733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:57.144 16:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.086 [2024-11-20 16:22:33.799155] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:58.086 [2024-11-20 16:22:33.799174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:58.086 [2024-11-20 16:22:33.799184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:58.086 [2024-11-20 16:22:33.799189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:58.086 [2024-11-20 16:22:33.799195] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:58.086 [2024-11-20 16:22:33.799200] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:58.086 [2024-11-20 16:22:33.799204] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:58.086 [2024-11-20 16:22:33.799207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:58.086 [2024-11-20 16:22:33.799224] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:58.086 [2024-11-20 16:22:33.799241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.086 [2024-11-20 16:22:33.799249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.086 [2024-11-20 16:22:33.799256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.086 [2024-11-20 16:22:33.799262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.086 [2024-11-20 16:22:33.799268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.086 [2024-11-20 16:22:33.799274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.086 [2024-11-20 16:22:33.799280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.086 [2024-11-20 16:22:33.799286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.086 [2024-11-20 16:22:33.799291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.086 [2024-11-20 16:22:33.799296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.086 [2024-11-20 16:22:33.799302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:58.086 [2024-11-20 16:22:33.799708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9a340 (9): Bad file descriptor 00:27:58.086 [2024-11-20 16:22:33.800718] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:58.086 [2024-11-20 16:22:33.800728] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:58.086 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.086 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.086 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.086 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.086 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.086 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.087 16:22:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.087 16:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:58.346 16:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:59.287 16:22:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:59.287 16:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:00.229 [2024-11-20 16:22:35.860254] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:00.229 [2024-11-20 16:22:35.860267] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:00.229 [2024-11-20 16:22:35.860276] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:00.229 [2024-11-20 16:22:35.988658] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:00.229 16:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:00.490 [2024-11-20 16:22:36.167673] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:00.490 [2024-11-20 16:22:36.168526] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1b9f1f0:1 started. 
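The nvme1 attach above is the payoff of the mirror-image restore step (sh@82/83/86): the address is re-added, the link comes back up, and the still-running discovery service reconnects and creates a fresh controller, which is why the bdev name moves from nvme0n1 to nvme1n1. Copied from the trace:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1   # discovery re-attaches under a new controller name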
00:28:00.490 [2024-11-20 16:22:36.169435] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:00.490 [2024-11-20 16:22:36.169463] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:00.490 [2024-11-20 16:22:36.169479] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:00.490 [2024-11-20 16:22:36.169490] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:00.490 [2024-11-20 16:22:36.169495] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:00.490 [2024-11-20 16:22:36.177275] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1b9f1f0 was disconnected and freed. delete nvme_qpair. 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1422608 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1422608 ']' 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1422608 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1422608 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1422608' 00:28:01.432 killing process with pid 1422608 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1422608 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1422608 00:28:01.432 16:22:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:01.432 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:01.432 rmmod nvme_tcp 00:28:01.693 rmmod nvme_fabrics 00:28:01.693 rmmod nvme_keyring 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1422319 ']' 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1422319 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1422319 ']' 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1422319 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1422319 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1422319' 00:28:01.693 killing process with pid 1422319 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1422319 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1422319 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.693 16:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.240 00:28:04.240 real 0m24.470s 00:28:04.240 user 0m29.748s 00:28:04.240 sys 0m7.063s 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:04.240 ************************************ 00:28:04.240 END TEST nvmf_discovery_remove_ifc 00:28:04.240 ************************************ 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.240 ************************************ 00:28:04.240 START TEST nvmf_identify_kernel_target 00:28:04.240 ************************************ 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:04.240 * Looking for test storage... 
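The teardown traced just above the START TEST banner is nvmftestfini unwinding what nvmftestinit set up. A condensed sketch of the visible steps; the netns deletion is an assumption, since _remove_spdk_ns runs with tracing suppressed:

  killprocess "$nvmfpid"       # kill + wait; ps -o comm= guards against killing a bare sudo
  modprobe -v -r nvme-tcp      # retried up to 20 times in the trace (rmmod lines above)
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1          # last visible command of the teardown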
00:28:04.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.240 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.241 --rc genhtml_branch_coverage=1 00:28:04.241 --rc genhtml_function_coverage=1 00:28:04.241 --rc genhtml_legend=1 00:28:04.241 --rc geninfo_all_blocks=1 00:28:04.241 --rc geninfo_unexecuted_blocks=1 00:28:04.241 00:28:04.241 ' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.241 --rc genhtml_branch_coverage=1 00:28:04.241 --rc genhtml_function_coverage=1 00:28:04.241 --rc genhtml_legend=1 00:28:04.241 --rc geninfo_all_blocks=1 00:28:04.241 --rc geninfo_unexecuted_blocks=1 00:28:04.241 00:28:04.241 ' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.241 --rc genhtml_branch_coverage=1 00:28:04.241 --rc genhtml_function_coverage=1 00:28:04.241 --rc genhtml_legend=1 00:28:04.241 --rc geninfo_all_blocks=1 00:28:04.241 --rc geninfo_unexecuted_blocks=1 00:28:04.241 00:28:04.241 ' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.241 --rc genhtml_branch_coverage=1 00:28:04.241 --rc genhtml_function_coverage=1 00:28:04.241 --rc genhtml_legend=1 00:28:04.241 --rc geninfo_all_blocks=1 00:28:04.241 --rc geninfo_unexecuted_blocks=1 00:28:04.241 00:28:04.241 ' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:04.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.241 16:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.242 16:22:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.387 16:22:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:12.387 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:12.387 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:12.387 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:12.387 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.387 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:28:12.388 00:28:12.388 --- 10.0.0.2 ping statistics --- 00:28:12.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.388 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:28:12.388 00:28:12.388 --- 10.0.0.1 ping statistics --- 00:28:12.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.388 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.388 16:22:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:12.388 16:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:15.688 Waiting for block devices as requested 00:28:15.688 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:15.688 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:15.688 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:15.688 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:15.688 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:15.688 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:15.688 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:15.688 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:15.949 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:16.210 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:16.210 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:16.210 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:16.210 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:16.470 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:16.470 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:16.470 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:16.732 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
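For reference, the nvmf_tcp_init sequence traced above reduces to the following shell. This is a reconstruction from the commands visible in the trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones this run detected), not the harness source itself:

# Move one E810 port into a private namespace to act as the target side,
# keep the other on the host as the initiator side.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule so teardown
# can drop exactly these rules later with:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check reachability in both directions before the test proper.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1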
00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:16.994 No valid GPT data, bailing 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:16.994 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:17.256 00:28:17.256 Discovery Log Number of Records 2, Generation counter 2 00:28:17.256 =====Discovery Log Entry 0====== 00:28:17.256 trtype: tcp 00:28:17.256 adrfam: ipv4 00:28:17.256 subtype: current discovery subsystem 00:28:17.256 treq: not specified, sq flow control disable supported 00:28:17.256 portid: 1 00:28:17.256 trsvcid: 4420 00:28:17.256 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:17.256 traddr: 10.0.0.1 00:28:17.256 eflags: none 00:28:17.256 sectype: none 00:28:17.256 =====Discovery Log Entry 1====== 00:28:17.256 trtype: tcp 00:28:17.256 adrfam: ipv4 00:28:17.256 subtype: nvme subsystem 00:28:17.256 treq: not specified, sq flow control disable 
supported 00:28:17.256 portid: 1 00:28:17.256 trsvcid: 4420 00:28:17.256 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:17.256 traddr: 10.0.0.1 00:28:17.256 eflags: none 00:28:17.256 sectype: none 00:28:17.256 16:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:17.256 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:17.256 ===================================================== 00:28:17.256 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:17.256 ===================================================== 00:28:17.256 Controller Capabilities/Features 00:28:17.256 ================================ 00:28:17.256 Vendor ID: 0000 00:28:17.256 Subsystem Vendor ID: 0000 00:28:17.256 Serial Number: b81dbfdfc66bc878587f 00:28:17.256 Model Number: Linux 00:28:17.256 Firmware Version: 6.8.9-20 00:28:17.256 Recommended Arb Burst: 0 00:28:17.256 IEEE OUI Identifier: 00 00 00 00:28:17.256 Multi-path I/O 00:28:17.256 May have multiple subsystem ports: No 00:28:17.256 May have multiple controllers: No 00:28:17.256 Associated with SR-IOV VF: No 00:28:17.256 Max Data Transfer Size: Unlimited 00:28:17.256 Max Number of Namespaces: 0 00:28:17.256 Max Number of I/O Queues: 1024 00:28:17.256 NVMe Specification Version (VS): 1.3 00:28:17.256 NVMe Specification Version (Identify): 1.3 00:28:17.256 Maximum Queue Entries: 1024 00:28:17.256 Contiguous Queues Required: No 00:28:17.256 Arbitration Mechanisms Supported 00:28:17.256 Weighted Round Robin: Not Supported 00:28:17.256 Vendor Specific: Not Supported 00:28:17.256 Reset Timeout: 7500 ms 00:28:17.256 Doorbell Stride: 4 bytes 00:28:17.256 NVM Subsystem Reset: Not Supported 00:28:17.256 Command Sets Supported 00:28:17.256 NVM Command Set: Supported 00:28:17.256 Boot Partition: Not Supported 00:28:17.256 Memory Page Size Minimum: 4096 bytes 00:28:17.256 Memory Page Size Maximum: 4096 bytes 00:28:17.256 Persistent Memory Region: Not Supported 00:28:17.256 Optional Asynchronous Events Supported 00:28:17.256 Namespace Attribute Notices: Not Supported 00:28:17.256 Firmware Activation Notices: Not Supported 00:28:17.256 ANA Change Notices: Not Supported 00:28:17.256 PLE Aggregate Log Change Notices: Not Supported 00:28:17.256 LBA Status Info Alert Notices: Not Supported 00:28:17.256 EGE Aggregate Log Change Notices: Not Supported 00:28:17.256 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.256 Zone Descriptor Change Notices: Not Supported 00:28:17.256 Discovery Log Change Notices: Supported 00:28:17.256 Controller Attributes 00:28:17.256 128-bit Host Identifier: Not Supported 00:28:17.256 Non-Operational Permissive Mode: Not Supported 00:28:17.256 NVM Sets: Not Supported 00:28:17.256 Read Recovery Levels: Not Supported 00:28:17.256 Endurance Groups: Not Supported 00:28:17.256 Predictable Latency Mode: Not Supported 00:28:17.256 Traffic Based Keep ALive: Not Supported 00:28:17.256 Namespace Granularity: Not Supported 00:28:17.256 SQ Associations: Not Supported 00:28:17.256 UUID List: Not Supported 00:28:17.256 Multi-Domain Subsystem: Not Supported 00:28:17.256 Fixed Capacity Management: Not Supported 00:28:17.256 Variable Capacity Management: Not Supported 00:28:17.256 Delete Endurance Group: Not Supported 00:28:17.256 Delete NVM Set: Not Supported 00:28:17.256 Extended LBA Formats Supported: Not Supported 00:28:17.256 Flexible Data Placement 
Supported: Not Supported 00:28:17.256 00:28:17.256 Controller Memory Buffer Support 00:28:17.256 ================================ 00:28:17.256 Supported: No 00:28:17.256 00:28:17.256 Persistent Memory Region Support 00:28:17.256 ================================ 00:28:17.256 Supported: No 00:28:17.256 00:28:17.256 Admin Command Set Attributes 00:28:17.256 ============================ 00:28:17.256 Security Send/Receive: Not Supported 00:28:17.256 Format NVM: Not Supported 00:28:17.256 Firmware Activate/Download: Not Supported 00:28:17.256 Namespace Management: Not Supported 00:28:17.256 Device Self-Test: Not Supported 00:28:17.256 Directives: Not Supported 00:28:17.256 NVMe-MI: Not Supported 00:28:17.256 Virtualization Management: Not Supported 00:28:17.256 Doorbell Buffer Config: Not Supported 00:28:17.256 Get LBA Status Capability: Not Supported 00:28:17.256 Command & Feature Lockdown Capability: Not Supported 00:28:17.256 Abort Command Limit: 1 00:28:17.256 Async Event Request Limit: 1 00:28:17.256 Number of Firmware Slots: N/A 00:28:17.256 Firmware Slot 1 Read-Only: N/A 00:28:17.256 Firmware Activation Without Reset: N/A 00:28:17.256 Multiple Update Detection Support: N/A 00:28:17.256 Firmware Update Granularity: No Information Provided 00:28:17.256 Per-Namespace SMART Log: No 00:28:17.256 Asymmetric Namespace Access Log Page: Not Supported 00:28:17.256 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:17.256 Command Effects Log Page: Not Supported 00:28:17.256 Get Log Page Extended Data: Supported 00:28:17.257 Telemetry Log Pages: Not Supported 00:28:17.257 Persistent Event Log Pages: Not Supported 00:28:17.257 Supported Log Pages Log Page: May Support 00:28:17.257 Commands Supported & Effects Log Page: Not Supported 00:28:17.257 Feature Identifiers & Effects Log Page:May Support 00:28:17.257 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.257 Data Area 4 for Telemetry Log: Not Supported 00:28:17.257 Error Log Page Entries Supported: 1 00:28:17.257 Keep Alive: Not Supported 00:28:17.257 00:28:17.257 NVM Command Set Attributes 00:28:17.257 ========================== 00:28:17.257 Submission Queue Entry Size 00:28:17.257 Max: 1 00:28:17.257 Min: 1 00:28:17.257 Completion Queue Entry Size 00:28:17.257 Max: 1 00:28:17.257 Min: 1 00:28:17.257 Number of Namespaces: 0 00:28:17.257 Compare Command: Not Supported 00:28:17.257 Write Uncorrectable Command: Not Supported 00:28:17.257 Dataset Management Command: Not Supported 00:28:17.257 Write Zeroes Command: Not Supported 00:28:17.257 Set Features Save Field: Not Supported 00:28:17.257 Reservations: Not Supported 00:28:17.257 Timestamp: Not Supported 00:28:17.257 Copy: Not Supported 00:28:17.257 Volatile Write Cache: Not Present 00:28:17.257 Atomic Write Unit (Normal): 1 00:28:17.257 Atomic Write Unit (PFail): 1 00:28:17.257 Atomic Compare & Write Unit: 1 00:28:17.257 Fused Compare & Write: Not Supported 00:28:17.257 Scatter-Gather List 00:28:17.257 SGL Command Set: Supported 00:28:17.257 SGL Keyed: Not Supported 00:28:17.257 SGL Bit Bucket Descriptor: Not Supported 00:28:17.257 SGL Metadata Pointer: Not Supported 00:28:17.257 Oversized SGL: Not Supported 00:28:17.257 SGL Metadata Address: Not Supported 00:28:17.257 SGL Offset: Supported 00:28:17.257 Transport SGL Data Block: Not Supported 00:28:17.257 Replay Protected Memory Block: Not Supported 00:28:17.257 00:28:17.257 Firmware Slot Information 00:28:17.257 ========================= 00:28:17.257 Active slot: 0 00:28:17.257 00:28:17.257 00:28:17.257 Error Log 00:28:17.257 
========= 00:28:17.257 00:28:17.257 Active Namespaces 00:28:17.257 ================= 00:28:17.257 Discovery Log Page 00:28:17.257 ================== 00:28:17.257 Generation Counter: 2 00:28:17.257 Number of Records: 2 00:28:17.257 Record Format: 0 00:28:17.257 00:28:17.257 Discovery Log Entry 0 00:28:17.257 ---------------------- 00:28:17.257 Transport Type: 3 (TCP) 00:28:17.257 Address Family: 1 (IPv4) 00:28:17.257 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:17.257 Entry Flags: 00:28:17.257 Duplicate Returned Information: 0 00:28:17.257 Explicit Persistent Connection Support for Discovery: 0 00:28:17.257 Transport Requirements: 00:28:17.257 Secure Channel: Not Specified 00:28:17.257 Port ID: 1 (0x0001) 00:28:17.257 Controller ID: 65535 (0xffff) 00:28:17.257 Admin Max SQ Size: 32 00:28:17.257 Transport Service Identifier: 4420 00:28:17.257 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:17.257 Transport Address: 10.0.0.1 00:28:17.257 Discovery Log Entry 1 00:28:17.257 ---------------------- 00:28:17.257 Transport Type: 3 (TCP) 00:28:17.257 Address Family: 1 (IPv4) 00:28:17.257 Subsystem Type: 2 (NVM Subsystem) 00:28:17.257 Entry Flags: 00:28:17.257 Duplicate Returned Information: 0 00:28:17.257 Explicit Persistent Connection Support for Discovery: 0 00:28:17.257 Transport Requirements: 00:28:17.257 Secure Channel: Not Specified 00:28:17.257 Port ID: 1 (0x0001) 00:28:17.257 Controller ID: 65535 (0xffff) 00:28:17.257 Admin Max SQ Size: 32 00:28:17.257 Transport Service Identifier: 4420 00:28:17.257 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:17.257 Transport Address: 10.0.0.1 00:28:17.257 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:17.519 get_feature(0x01) failed 00:28:17.519 get_feature(0x02) failed 00:28:17.519 get_feature(0x04) failed 00:28:17.519 ===================================================== 00:28:17.519 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:17.519 ===================================================== 00:28:17.519 Controller Capabilities/Features 00:28:17.519 ================================ 00:28:17.519 Vendor ID: 0000 00:28:17.519 Subsystem Vendor ID: 0000 00:28:17.519 Serial Number: 1becca668a7319f62cd2 00:28:17.519 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:17.519 Firmware Version: 6.8.9-20 00:28:17.519 Recommended Arb Burst: 6 00:28:17.519 IEEE OUI Identifier: 00 00 00 00:28:17.519 Multi-path I/O 00:28:17.519 May have multiple subsystem ports: Yes 00:28:17.519 May have multiple controllers: Yes 00:28:17.519 Associated with SR-IOV VF: No 00:28:17.519 Max Data Transfer Size: Unlimited 00:28:17.520 Max Number of Namespaces: 1024 00:28:17.520 Max Number of I/O Queues: 128 00:28:17.520 NVMe Specification Version (VS): 1.3 00:28:17.520 NVMe Specification Version (Identify): 1.3 00:28:17.520 Maximum Queue Entries: 1024 00:28:17.520 Contiguous Queues Required: No 00:28:17.520 Arbitration Mechanisms Supported 00:28:17.520 Weighted Round Robin: Not Supported 00:28:17.520 Vendor Specific: Not Supported 00:28:17.520 Reset Timeout: 7500 ms 00:28:17.520 Doorbell Stride: 4 bytes 00:28:17.520 NVM Subsystem Reset: Not Supported 00:28:17.520 Command Sets Supported 00:28:17.520 NVM Command Set: Supported 00:28:17.520 Boot Partition: Not Supported 00:28:17.520 
Memory Page Size Minimum: 4096 bytes 00:28:17.520 Memory Page Size Maximum: 4096 bytes 00:28:17.520 Persistent Memory Region: Not Supported 00:28:17.520 Optional Asynchronous Events Supported 00:28:17.520 Namespace Attribute Notices: Supported 00:28:17.520 Firmware Activation Notices: Not Supported 00:28:17.520 ANA Change Notices: Supported 00:28:17.520 PLE Aggregate Log Change Notices: Not Supported 00:28:17.520 LBA Status Info Alert Notices: Not Supported 00:28:17.520 EGE Aggregate Log Change Notices: Not Supported 00:28:17.520 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.520 Zone Descriptor Change Notices: Not Supported 00:28:17.520 Discovery Log Change Notices: Not Supported 00:28:17.520 Controller Attributes 00:28:17.520 128-bit Host Identifier: Supported 00:28:17.520 Non-Operational Permissive Mode: Not Supported 00:28:17.520 NVM Sets: Not Supported 00:28:17.520 Read Recovery Levels: Not Supported 00:28:17.520 Endurance Groups: Not Supported 00:28:17.520 Predictable Latency Mode: Not Supported 00:28:17.520 Traffic Based Keep ALive: Supported 00:28:17.520 Namespace Granularity: Not Supported 00:28:17.520 SQ Associations: Not Supported 00:28:17.520 UUID List: Not Supported 00:28:17.520 Multi-Domain Subsystem: Not Supported 00:28:17.520 Fixed Capacity Management: Not Supported 00:28:17.520 Variable Capacity Management: Not Supported 00:28:17.520 Delete Endurance Group: Not Supported 00:28:17.520 Delete NVM Set: Not Supported 00:28:17.520 Extended LBA Formats Supported: Not Supported 00:28:17.520 Flexible Data Placement Supported: Not Supported 00:28:17.520 00:28:17.520 Controller Memory Buffer Support 00:28:17.520 ================================ 00:28:17.520 Supported: No 00:28:17.520 00:28:17.520 Persistent Memory Region Support 00:28:17.520 ================================ 00:28:17.520 Supported: No 00:28:17.520 00:28:17.520 Admin Command Set Attributes 00:28:17.520 ============================ 00:28:17.520 Security Send/Receive: Not Supported 00:28:17.520 Format NVM: Not Supported 00:28:17.520 Firmware Activate/Download: Not Supported 00:28:17.520 Namespace Management: Not Supported 00:28:17.520 Device Self-Test: Not Supported 00:28:17.520 Directives: Not Supported 00:28:17.520 NVMe-MI: Not Supported 00:28:17.520 Virtualization Management: Not Supported 00:28:17.520 Doorbell Buffer Config: Not Supported 00:28:17.520 Get LBA Status Capability: Not Supported 00:28:17.520 Command & Feature Lockdown Capability: Not Supported 00:28:17.520 Abort Command Limit: 4 00:28:17.520 Async Event Request Limit: 4 00:28:17.520 Number of Firmware Slots: N/A 00:28:17.520 Firmware Slot 1 Read-Only: N/A 00:28:17.520 Firmware Activation Without Reset: N/A 00:28:17.520 Multiple Update Detection Support: N/A 00:28:17.520 Firmware Update Granularity: No Information Provided 00:28:17.520 Per-Namespace SMART Log: Yes 00:28:17.520 Asymmetric Namespace Access Log Page: Supported 00:28:17.520 ANA Transition Time : 10 sec 00:28:17.520 00:28:17.520 Asymmetric Namespace Access Capabilities 00:28:17.520 ANA Optimized State : Supported 00:28:17.520 ANA Non-Optimized State : Supported 00:28:17.520 ANA Inaccessible State : Supported 00:28:17.520 ANA Persistent Loss State : Supported 00:28:17.520 ANA Change State : Supported 00:28:17.520 ANAGRPID is not changed : No 00:28:17.520 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:17.520 00:28:17.520 ANA Group Identifier Maximum : 128 00:28:17.520 Number of ANA Group Identifiers : 128 00:28:17.520 Max Number of Allowed Namespaces : 1024 00:28:17.520 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:17.520 Command Effects Log Page: Supported 00:28:17.520 Get Log Page Extended Data: Supported 00:28:17.520 Telemetry Log Pages: Not Supported 00:28:17.520 Persistent Event Log Pages: Not Supported 00:28:17.520 Supported Log Pages Log Page: May Support 00:28:17.520 Commands Supported & Effects Log Page: Not Supported 00:28:17.520 Feature Identifiers & Effects Log Page:May Support 00:28:17.520 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.520 Data Area 4 for Telemetry Log: Not Supported 00:28:17.520 Error Log Page Entries Supported: 128 00:28:17.520 Keep Alive: Supported 00:28:17.520 Keep Alive Granularity: 1000 ms 00:28:17.520 00:28:17.520 NVM Command Set Attributes 00:28:17.520 ========================== 00:28:17.520 Submission Queue Entry Size 00:28:17.520 Max: 64 00:28:17.520 Min: 64 00:28:17.520 Completion Queue Entry Size 00:28:17.520 Max: 16 00:28:17.520 Min: 16 00:28:17.520 Number of Namespaces: 1024 00:28:17.520 Compare Command: Not Supported 00:28:17.520 Write Uncorrectable Command: Not Supported 00:28:17.520 Dataset Management Command: Supported 00:28:17.520 Write Zeroes Command: Supported 00:28:17.520 Set Features Save Field: Not Supported 00:28:17.520 Reservations: Not Supported 00:28:17.520 Timestamp: Not Supported 00:28:17.520 Copy: Not Supported 00:28:17.520 Volatile Write Cache: Present 00:28:17.520 Atomic Write Unit (Normal): 1 00:28:17.520 Atomic Write Unit (PFail): 1 00:28:17.520 Atomic Compare & Write Unit: 1 00:28:17.520 Fused Compare & Write: Not Supported 00:28:17.520 Scatter-Gather List 00:28:17.520 SGL Command Set: Supported 00:28:17.520 SGL Keyed: Not Supported 00:28:17.520 SGL Bit Bucket Descriptor: Not Supported 00:28:17.520 SGL Metadata Pointer: Not Supported 00:28:17.520 Oversized SGL: Not Supported 00:28:17.520 SGL Metadata Address: Not Supported 00:28:17.520 SGL Offset: Supported 00:28:17.520 Transport SGL Data Block: Not Supported 00:28:17.520 Replay Protected Memory Block: Not Supported 00:28:17.520 00:28:17.520 Firmware Slot Information 00:28:17.520 ========================= 00:28:17.520 Active slot: 0 00:28:17.520 00:28:17.520 Asymmetric Namespace Access 00:28:17.520 =========================== 00:28:17.520 Change Count : 0 00:28:17.520 Number of ANA Group Descriptors : 1 00:28:17.520 ANA Group Descriptor : 0 00:28:17.520 ANA Group ID : 1 00:28:17.520 Number of NSID Values : 1 00:28:17.520 Change Count : 0 00:28:17.520 ANA State : 1 00:28:17.520 Namespace Identifier : 1 00:28:17.520 00:28:17.520 Commands Supported and Effects 00:28:17.520 ============================== 00:28:17.520 Admin Commands 00:28:17.520 -------------- 00:28:17.520 Get Log Page (02h): Supported 00:28:17.520 Identify (06h): Supported 00:28:17.520 Abort (08h): Supported 00:28:17.520 Set Features (09h): Supported 00:28:17.520 Get Features (0Ah): Supported 00:28:17.520 Asynchronous Event Request (0Ch): Supported 00:28:17.520 Keep Alive (18h): Supported 00:28:17.520 I/O Commands 00:28:17.520 ------------ 00:28:17.520 Flush (00h): Supported 00:28:17.520 Write (01h): Supported LBA-Change 00:28:17.520 Read (02h): Supported 00:28:17.520 Write Zeroes (08h): Supported LBA-Change 00:28:17.520 Dataset Management (09h): Supported 00:28:17.520 00:28:17.520 Error Log 00:28:17.520 ========= 00:28:17.520 Entry: 0 00:28:17.520 Error Count: 0x3 00:28:17.520 Submission Queue Id: 0x0 00:28:17.520 Command Id: 0x5 00:28:17.520 Phase Bit: 0 00:28:17.520 Status Code: 0x2 00:28:17.520 Status Code Type: 0x0 00:28:17.520 Do Not Retry: 1 00:28:17.520 
Error Location: 0x28 00:28:17.520 LBA: 0x0 00:28:17.520 Namespace: 0x0 00:28:17.520 Vendor Log Page: 0x0 00:28:17.520 ----------- 00:28:17.520 Entry: 1 00:28:17.520 Error Count: 0x2 00:28:17.520 Submission Queue Id: 0x0 00:28:17.520 Command Id: 0x5 00:28:17.520 Phase Bit: 0 00:28:17.520 Status Code: 0x2 00:28:17.520 Status Code Type: 0x0 00:28:17.520 Do Not Retry: 1 00:28:17.520 Error Location: 0x28 00:28:17.520 LBA: 0x0 00:28:17.520 Namespace: 0x0 00:28:17.520 Vendor Log Page: 0x0 00:28:17.520 ----------- 00:28:17.520 Entry: 2 00:28:17.520 Error Count: 0x1 00:28:17.520 Submission Queue Id: 0x0 00:28:17.520 Command Id: 0x4 00:28:17.520 Phase Bit: 0 00:28:17.521 Status Code: 0x2 00:28:17.521 Status Code Type: 0x0 00:28:17.521 Do Not Retry: 1 00:28:17.521 Error Location: 0x28 00:28:17.521 LBA: 0x0 00:28:17.521 Namespace: 0x0 00:28:17.521 Vendor Log Page: 0x0 00:28:17.521 00:28:17.521 Number of Queues 00:28:17.521 ================ 00:28:17.521 Number of I/O Submission Queues: 128 00:28:17.521 Number of I/O Completion Queues: 128 00:28:17.521 00:28:17.521 ZNS Specific Controller Data 00:28:17.521 ============================ 00:28:17.521 Zone Append Size Limit: 0 00:28:17.521 00:28:17.521 00:28:17.521 Active Namespaces 00:28:17.521 ================= 00:28:17.521 get_feature(0x05) failed 00:28:17.521 Namespace ID:1 00:28:17.521 Command Set Identifier: NVM (00h) 00:28:17.521 Deallocate: Supported 00:28:17.521 Deallocated/Unwritten Error: Not Supported 00:28:17.521 Deallocated Read Value: Unknown 00:28:17.521 Deallocate in Write Zeroes: Not Supported 00:28:17.521 Deallocated Guard Field: 0xFFFF 00:28:17.521 Flush: Supported 00:28:17.521 Reservation: Not Supported 00:28:17.521 Namespace Sharing Capabilities: Multiple Controllers 00:28:17.521 Size (in LBAs): 3750748848 (1788GiB) 00:28:17.521 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:17.521 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:17.521 UUID: 7654ee23-bd59-41bb-bc9d-5e373cbbbfd1 00:28:17.521 Thin Provisioning: Not Supported 00:28:17.521 Per-NS Atomic Units: Yes 00:28:17.521 Atomic Write Unit (Normal): 8 00:28:17.521 Atomic Write Unit (PFail): 8 00:28:17.521 Preferred Write Granularity: 8 00:28:17.521 Atomic Compare & Write Unit: 8 00:28:17.521 Atomic Boundary Size (Normal): 0 00:28:17.521 Atomic Boundary Size (PFail): 0 00:28:17.521 Atomic Boundary Offset: 0 00:28:17.521 NGUID/EUI64 Never Reused: No 00:28:17.521 ANA group ID: 1 00:28:17.521 Namespace Write Protected: No 00:28:17.521 Number of LBA Formats: 1 00:28:17.521 Current LBA Format: LBA Format #00 00:28:17.521 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:17.521 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.521 rmmod nvme_tcp 00:28:17.521 rmmod nvme_fabrics 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.521 16:22:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.434 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:19.434 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:19.434 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:19.434 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:19.434 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.434 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:19.434 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:19.435 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.695 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:19.695 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:19.695 16:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:22.999 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:22.999 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:22.999 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:28:22.999 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:22.999 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:22.999 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:23.260 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:23.833 00:28:23.833 real 0m19.690s 00:28:23.833 user 0m5.298s 00:28:23.833 sys 0m11.396s 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.833 ************************************ 00:28:23.833 END TEST nvmf_identify_kernel_target 00:28:23.833 ************************************ 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.833 ************************************ 00:28:23.833 START TEST nvmf_auth_host 00:28:23.833 ************************************ 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:23.833 * Looking for test storage... 
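Stepping back: the identify_kernel_target run that just finished drove the Linux kernel target entirely through nvmet's configfs tree. Condensed into standalone shell below; note that xtrace does not echo redirection targets, so the attribute file names are the standard nvmet ones inferred from the values the trace shows being written (e.g. the SPDK-nqn... string surfaces as the Model Number in the identify output above, which points at attr_model), not read off the log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # backing block device
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"               # listen address
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # expose subsystem on port

# clean_kernel_target, traced further up, reverses this:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet

With the port linked, the test verified the target with nvme discover --hostnqn=... -a 10.0.0.1 -t tcp -s 4420 (two discovery records, as logged) and then ran spdk_nvme_identify against both the discovery subsystem and nqn.2016-06.io.spdk:testnqn.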
00:28:23.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.833 --rc genhtml_branch_coverage=1 00:28:23.833 --rc genhtml_function_coverage=1 00:28:23.833 --rc genhtml_legend=1 00:28:23.833 --rc geninfo_all_blocks=1 00:28:23.833 --rc geninfo_unexecuted_blocks=1 00:28:23.833 00:28:23.833 ' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.833 --rc genhtml_branch_coverage=1 00:28:23.833 --rc genhtml_function_coverage=1 00:28:23.833 --rc genhtml_legend=1 00:28:23.833 --rc geninfo_all_blocks=1 00:28:23.833 --rc geninfo_unexecuted_blocks=1 00:28:23.833 00:28:23.833 ' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.833 --rc genhtml_branch_coverage=1 00:28:23.833 --rc genhtml_function_coverage=1 00:28:23.833 --rc genhtml_legend=1 00:28:23.833 --rc geninfo_all_blocks=1 00:28:23.833 --rc geninfo_unexecuted_blocks=1 00:28:23.833 00:28:23.833 ' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.833 --rc genhtml_branch_coverage=1 00:28:23.833 --rc genhtml_function_coverage=1 00:28:23.833 --rc genhtml_legend=1 00:28:23.833 --rc geninfo_all_blocks=1 00:28:23.833 --rc geninfo_unexecuted_blocks=1 00:28:23.833 00:28:23.833 ' 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.833 16:22:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.833 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.094 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:24.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.095 16:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.242 16:23:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:32.242 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:32.242 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.242 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.243 
16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:32.243 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:32.243 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.243 16:23:06 
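# The pci_devs loop traced above resolves each matching Intel E810 function
# (device ID 0x159b) to its kernel netdev by globbing sysfs. A minimal
# standalone sketch of the same lookup -- the PCI addresses are the two
# reported in this run, everything else is illustrative:
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
    done
done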
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.243 16:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:28:32.243 00:28:32.243 --- 10.0.0.2 ping statistics --- 00:28:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.243 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:28:32.243 00:28:32.243 --- 10.0.0.1 ping statistics --- 00:28:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.243 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1437115 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1437115 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1437115 ']' 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
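# nvmf_tcp_init (common.sh@250-291 above) turns the two physical ports into a
# two-endpoint topology: the target port is moved into a private network
# namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2
# on cvl_0_0) exchange traffic over a real link, verified by the two pings.
# Condensed to its essentials, with the same names as this run:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator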
00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.243 16:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.243 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.243 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:32.243 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.243 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.243 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0aeedf31201f5d5a35cf57eb03b06a3d 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YFE 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0aeedf31201f5d5a35cf57eb03b06a3d 0 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0aeedf31201f5d5a35cf57eb03b06a3d 0 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0aeedf31201f5d5a35cf57eb03b06a3d 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YFE 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YFE 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.YFE 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:32.504 16:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=012ccd9a3f9dd2116b8af2f4bb73223198e719e1d9be1f6548cb54ddc17ec8fa 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XIZ 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 012ccd9a3f9dd2116b8af2f4bb73223198e719e1d9be1f6548cb54ddc17ec8fa 3 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 012ccd9a3f9dd2116b8af2f4bb73223198e719e1d9be1f6548cb54ddc17ec8fa 3 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=012ccd9a3f9dd2116b8af2f4bb73223198e719e1d9be1f6548cb54ddc17ec8fa 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XIZ 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XIZ 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XIZ 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cde13afebda6e460ebac1701ac0f4d13995097224200b63c 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QFf 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cde13afebda6e460ebac1701ac0f4d13995097224200b63c 0 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cde13afebda6e460ebac1701ac0f4d13995097224200b63c 0 
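# gen_dhchap_key builds each secret in two steps, as traced above: xxd pulls
# len/2 random bytes as an ASCII hex string, and format_key wraps that string
# in the DHHC-1 container, "DHHC-1:<hash id>:<base64(secret + CRC-32)>:",
# with the hash id taken from the digests map (null=0, sha256=1, sha384=2,
# sha512=3). A minimal re-implementation of the "python -" step; the
# little-endian CRC byte order is an assumption based on the DHHC-1 secret
# representation, since the script body is not shown in this trace:
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars -> digest id 0 (null)
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the ASCII hex is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # CRC-32 appended to the secret
print(f"DHHC-1:00:{base64.b64encode(secret + crc).decode()}:")
EOF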
00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cde13afebda6e460ebac1701ac0f4d13995097224200b63c 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QFf 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QFf 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QFf 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:32.504 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:32.505 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:32.505 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5def861a7350b8d7b8c12c0b70a47844698422dd63592073 00:28:32.505 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.J7e 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5def861a7350b8d7b8c12c0b70a47844698422dd63592073 2 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5def861a7350b8d7b8c12c0b70a47844698422dd63592073 2 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5def861a7350b8d7b8c12c0b70a47844698422dd63592073 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.J7e 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.J7e 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.J7e 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:32.766 16:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:32.766 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=576cbfbea8ba471bbf547a989a944bb7 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.26Q 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 576cbfbea8ba471bbf547a989a944bb7 1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 576cbfbea8ba471bbf547a989a944bb7 1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=576cbfbea8ba471bbf547a989a944bb7 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.26Q 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.26Q 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.26Q 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=df352c5cf7753aef205024d3bfd84aed 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kZR 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key df352c5cf7753aef205024d3bfd84aed 1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 df352c5cf7753aef205024d3bfd84aed 1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=df352c5cf7753aef205024d3bfd84aed 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kZR 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kZR 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kZR 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=514bae08b9a0aa7fb785d1cf0a492b45827334b94888256f 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LJC 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 514bae08b9a0aa7fb785d1cf0a492b45827334b94888256f 2 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 514bae08b9a0aa7fb785d1cf0a492b45827334b94888256f 2 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=514bae08b9a0aa7fb785d1cf0a492b45827334b94888256f 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LJC 00:28:32.767 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LJC 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LJC 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:33.079 16:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7634f31efb00fe98a88f54a3a5e556b3 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RBY 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7634f31efb00fe98a88f54a3a5e556b3 0 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7634f31efb00fe98a88f54a3a5e556b3 0 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7634f31efb00fe98a88f54a3a5e556b3 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RBY 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RBY 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.RBY 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d99ee7dbc47a65675ef861e1564b15c2f949b81632caf8925190b25bb10cfb61 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aCD 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d99ee7dbc47a65675ef861e1564b15c2f949b81632caf8925190b25bb10cfb61 3 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d99ee7dbc47a65675ef861e1564b15c2f949b81632caf8925190b25bb10cfb61 3 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d99ee7dbc47a65675ef861e1564b15c2f949b81632caf8925190b25bb10cfb61 00:28:33.079 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aCD 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aCD 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.aCD 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1437115 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1437115 ']' 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.080 16:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YFE 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XIZ ]] 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XIZ 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.384 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QFf 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.J7e ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.J7e 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.26Q 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kZR ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kZR 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LJC 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.RBY ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.RBY 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.aCD 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.385 16:23:09 
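# The rpc_cmd wrapper above registers every generated key file with the
# nvmf_tgt started earlier, pairing keyN (host secret) with ckeyN (controller
# secret for bidirectional auth); key4 deliberately has no ckey. Outside the
# harness these would be plain rpc.py calls against the default
# /var/tmp/spdk.sock socket (file names are the mktemp results of this run):
scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.YFE
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XIZ
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.QFf
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J7e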
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.385 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:33.386 16:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:36.697 Waiting for block devices as requested 00:28:36.697 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:36.958 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:36.958 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:36.958 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:36.958 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:37.220 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:37.220 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:37.220 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:37.481 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:37.481 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:37.742 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:37.742 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:37.742 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:37.742 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:38.003 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:38.003 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:38.003 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:38.947 No valid GPT data, bailing 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:38.947 16:23:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:38.947 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:39.208 00:28:39.208 Discovery Log Number of Records 2, Generation counter 2 00:28:39.208 =====Discovery Log Entry 0====== 00:28:39.208 trtype: tcp 00:28:39.208 adrfam: ipv4 00:28:39.208 subtype: current discovery subsystem 00:28:39.208 treq: not specified, sq flow control disable supported 00:28:39.208 portid: 1 00:28:39.208 trsvcid: 4420 00:28:39.208 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:39.208 traddr: 10.0.0.1 00:28:39.208 eflags: none 00:28:39.208 sectype: none 00:28:39.208 =====Discovery Log Entry 1====== 00:28:39.208 trtype: tcp 00:28:39.208 adrfam: ipv4 00:28:39.209 subtype: nvme subsystem 00:28:39.209 treq: not specified, sq flow control disable supported 00:28:39.209 portid: 1 00:28:39.209 trsvcid: 4420 00:28:39.209 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:39.209 traddr: 10.0.0.1 00:28:39.209 eflags: none 00:28:39.209 sectype: none 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
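# configure_kernel_target (common.sh@686-705) and nvmet_auth_init
# (host/auth.sh@36-38) drive the Linux nvmet configfs directly, then verify
# the port with "nvme discover". The xtrace shows bare "echo" commands
# without their redirection targets, so the attribute names below are
# inferred from the standard nvmet configfs layout rather than read from
# this log:
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"      # expose the subsystem on the port
echo 0 > "$subsys/attr_allow_any_host"   # assumed target of the bare "echo 0" at host/auth.sh@37
ln -s "$host" "$subsys/allowed_hosts/"   # admit only the test host NQN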
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.209 16:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.209 nvme0n1 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.209 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.470 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.471 nvme0n1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.471 16:23:15 
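# Each (digest, dhgroup, keyid) iteration of the matrix re-keys the kernel
# target via nvmet_auth_set_key and then authenticates an SPDK host
# controller against it with connect_authenticate. Stripped of the helper
# plumbing, one pass on the host side is (flags verbatim from the trace):
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0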
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.471 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.733 nvme0n1 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.733 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.994 nvme0n1 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.994 16:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.256 nvme0n1 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.256 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.517 nvme0n1 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.517 16:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:40.517 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.518 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.779 nvme0n1 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.780 
16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.780 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.042 nvme0n1 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.042 16:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.042 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.043 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.043 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.043 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.043 16:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.304 nvme0n1 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.304 16:23:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.304 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.564 nvme0n1 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.565 16:23:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.565 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.826 nvme0n1 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.826 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.087 nvme0n1 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:42.087 16:23:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.087 16:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.347 nvme0n1 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.347 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
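[The cycle traced above repeats once per key id: detach the previous nvme0 controller, push the next DH-HMAC-CHAP key into the kernel nvmet target, then reconnect. Because set -x does not print redirections, the trace only shows the bare echo commands at host/auth.sh@48-51; the sketch below fills in plausible configfs destinations for those writes. The $nvmet_host path and the keys/ckeys arrays are assumptions about the surrounding script, not something this log states.

    # Minimal sketch of nvmet_auth_set_key as traced above; configfs paths are assumed.
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}   # arrays filled earlier in auth.sh

        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"      # host/auth.sh@48
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"        # host/auth.sh@49
        echo "$key" > "$nvmet_host/dhchap_key"                # host/auth.sh@50
        # A controller key is only written when the key id has one (host/auth.sh@51):
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }
]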
00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.608 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.869 nvme0n1 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.869 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.130 nvme0n1 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.130 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.131 16:23:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.131 16:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.393 nvme0n1 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.393 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.655 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.656 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.656 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.656 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.916 nvme0n1 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 
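[On the host side, connect_authenticate (host/auth.sh@104) narrows SPDK to a single digest/dhgroup pair and then attaches with the matching named keys. A hand-rolled equivalent of the iteration traced here, assuming rpc_cmd wraps SPDK's scripts/rpc.py and that key1/ckey1 were registered as keyring names earlier in the run; all flag names are copied from the trace:

    rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client

    # Force the negotiation to the pair under test:
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    # Attach using the per-key secrets; --dhchap-ctrlr-key makes auth bidirectional:
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The iteration passes when the controller materializes, then it is torn down:
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0
]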
00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.916 16:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.488 nvme0n1 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.488 16:23:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.488 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.061 nvme0n1 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.061 16:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.322 nvme0n1 00:28:45.322 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.322 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.322 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.322 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.322 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.583 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.584 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.845 nvme0n1 00:28:45.845 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.845 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.845 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.845 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.845 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.845 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.106 16:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.680 nvme0n1 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.680 16:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.622 nvme0n1 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:47.622 
16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.622 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.623 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.194 nvme0n1 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.194 
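The nvmet_auth_set_key calls traced at auth.sh@42-51 mirror the same secrets into the kernel nvmet target so it can verify the initiator. A hedged reconstruction of that helper, assuming a host entry already exists under /sys/kernel/config/nvmet/hosts, that keys/ckeys are the arrays the loop indexes, and that writes happen as root; the configfs attribute names are the standard nvmet DH-HMAC-CHAP ones rather than values read from this log.

    # Assumed shape: nvmet_auth_set_key <digest> <dhgroup> <keyid>
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host_cfg/dhchap_hash"        # auth.sh@48
        echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"          # auth.sh@49
        echo "${keys[keyid]}" > "$host_cfg/dhchap_key"        # auth.sh@50
        if [[ -n ${ckeys[keyid]} ]]; then                     # auth.sh@51
            echo "${ckeys[keyid]}" > "$host_cfg/dhchap_ctrl_key"
        fi
    }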
16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.194 16:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.194 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.195 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.195 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.195 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.195 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.195 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.766 nvme0n1 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.766 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.026 16:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.596 nvme0n1 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:49.596 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.597 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.857 nvme0n1 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
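Between combinations the test proves the authenticated connection actually came up and then tears it down (auth.sh@64-65). Assembled verbatim from the rpc_cmd and jq calls in this trace, again with scripts/rpc.py standing in for rpc_cmd:

    # auth.sh@64: the controller list must come back with the name we attached.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]
    # auth.sh@65: detach so the next digest/dhgroup/keyid starts from scratch.
    scripts/rpc.py bdev_nvme_detach_controller nvme0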
common/autotest_common.sh@10 -- # set +x 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:49.857 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.858 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.119 nvme0n1 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:50.119 16:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.119 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.120 16:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.381 nvme0n1 00:28:50.381 16:23:26 
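The for-lines at auth.sh@100-102 reveal the sweep driving this whole stretch: every digest is paired with every DH group and every key index, as in the reconstruction below. The array contents are assumptions inferred from the combinations visible in this trace (sha256 and sha384 so far, ffdhe2048 through ffdhe8192, keyids 0-4); nvmet_auth_set_key and connect_authenticate are the auth.sh helpers named in the trace.

    digests=(sha256 sha384 sha512)                                  # assumed full list
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)    # assumed full list
    for digest in "${digests[@]}"; do                               # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do                         # auth.sh@101
            for keyid in "${!keys[@]}"; do                          # auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@104
            done
        done
    done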
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.381 nvme0n1 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.381 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.642 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.643 nvme0n1 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.643 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.905 nvme0n1 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.905 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.166 
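The ip_candidates block repeated before every attach is get_main_ns_ip (nvmf/common.sh@769-783) picking the address to dial for the current transport. A hedged reconstruction: the indirect-expansion step is inferred, since the trace jumps from ip=NVMF_INITIATOR_IP straight to testing 10.0.0.1, and TEST_TRANSPORT is an assumed variable name for the literal tcp seen in the [[ -z tcp ]] test.

    get_main_ns_ip() {
        local ip                                          # common.sh@769
        local -A ip_candidates                            # common.sh@770
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP        # common.sh@772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP            # common.sh@773
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # common.sh@775
        ip=${ip_candidates[$TEST_TRANSPORT]}              # common.sh@776
        ip=${!ip}                                         # inferred: indirect expansion
        [[ -z $ip ]] && return 1                          # common.sh@778
        echo "$ip"                                        # common.sh@783
    }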
16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.166 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.167 16:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.167 16:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.167 nvme0n1 00:28:51.167 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.167 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.167 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.167 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.167 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.167 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.428 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.429 nvme0n1 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.429 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.690 nvme0n1 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.690 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.951 
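[The echo records just above (host/auth.sh@48 through @51) are nvmet_auth_set_key provisioning the kernel nvmet target with the digest, DH group, and secret for the host about to connect. The xtrace does not capture the redirections, so the configfs paths in this minimal sketch are an assumption based on the Linux nvmet host attributes, not something shown in this log:

# Assumed shape of nvmet_auth_set_key's writes; $key/$ckey as set at @45/@46.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest, host/auth.sh@48
echo ffdhe3072 > "$host/dhchap_dhgroup"        # DH group, host/auth.sh@49
echo "$key" > "$host/dhchap_key"               # host secret, host/auth.sh@50
# @51 writes the controller secret only when a ckey exists for this keyid:
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"]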
16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.951 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.952 nvme0n1 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.952 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.952 
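[On the initiator side, every (digest, dhgroup, keyid) combination runs the same connect, verify, and teardown cycle; the commands below are collected verbatim from the surrounding trace (rpc_cmd forwards to SPDK's scripts/rpc.py through the suite's RPC session). The bare nvme0n1 lines interleaved in the log appear to be the attach RPC printing the bdev it created:

# One cycle of connect_authenticate, as driven by host/auth.sh@60-65.
# Restrict the host driver to the digest and DH group under test:
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# Attach over TCP, authenticating with keyN (plus ckeyN for bidirectional auth):
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# Success check: the controller must show up under its -b name; xtrace renders
# the quoted right-hand side as \n\v\m\e\0, which is escaping, not corruption:
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
# Detach so the next combination starts clean:
rpc_cmd bdev_nvme_detach_controller nvme0]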
16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.212 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.213 16:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.474 nvme0n1 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:52.474 16:23:28 
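[All secrets in this run use the NVMe DH-HMAC-CHAP representation DHHC-1:<id>:<Base64 payload>:, where the id field ties the secret to a hash and length (01 = SHA-256/32 bytes, 02 = SHA-384/48 bytes, 03 = SHA-512/64 bytes; 00 leaves the hash unspecified) and the Base64 payload is the raw secret followed by its CRC-32. A quick way to sanity-check one of the keys from this trace, assuming coreutils base64:

# The decoded payload is secret + 4 CRC bytes, so an id-02 key decodes to 52:
key='DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==:'
printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c    # -> 52 (48 + CRC-32)]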
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.474 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.736 nvme0n1 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.736 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.997 nvme0n1 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:52.997 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.259 16:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.519 nvme0n1 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.519 16:23:29 
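[The common/autotest_common.sh@563, @10, and @591 records that bracket every RPC are the suite's xtrace guard: tracing is switched off while the RPC output is emitted, and the saved status is what the log shows as [[ 0 == 0 ]]. A rough reconstruction; apart from xtrace_disable, the names here (xtrace_restore, $rootdir) are guesses for illustration, not taken from this log:

xtrace_disable() { set +x; }                 # the @563 -> @10 pair in the trace
rpc_cmd() {
    xtrace_disable                           # keep rpc.py's output untraced
    local rc=0
    "$rootdir/scripts/rpc.py" "$@" || rc=$?  # $rootdir: SPDK checkout, assumed
    xtrace_restore                           # hypothetical matching re-enable
    [[ $rc == 0 ]]                           # surfaces as [[ 0 == 0 ]] at @591
}]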
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.519 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.780 nvme0n1 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.780 16:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.352 nvme0n1 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.352 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.353 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.613 nvme0n1 00:28:54.613 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.613 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.613 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.613 16:23:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.613 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.613 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.873 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.873 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.873 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.874 16:23:30 
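[One detail worth unpacking from host/auth.sh@58: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) builds the optional controller-key arguments as an array. ${var:+word} expands to word only when var is set and non-empty, so for keyid 4, whose ckey is blank (the [[ -z '' ]] tests above), the array stays empty and the later "${ckey[@]}" splice adds no flags to the attach call. A self-contained demo of the idiom; the ckeys contents are stand-ins, only the emptiness of entry 4 mirrors this run:

ckeys=([1]="DHHC-1:02:controller-secret:" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 -> 0 extra args:]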
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.874 16:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.134 nvme0n1 00:28:55.134 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.134 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.134 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.134 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.134 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.134 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:55.396 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:55.397 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.397 
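[The nvmf/common.sh@769-783 run above is get_main_ns_ip resolving which address to dial: an associative array maps the transport under test to the name of the environment variable holding the address, and ${!ip} indirection dereferences it. A reconstruction from the expanded values in the trace; the $TEST_TRANSPORT name is inferred, since xtrace only shows its value, tcp, and the failure handling is a guess because only the happy path appears here:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # nvmf/common.sh@772
    ip_candidates["tcp"]=NVMF_INITIATOR_IP        # nvmf/common.sh@773
    [[ -z $TEST_TRANSPORT ]] && return 1                     # @775, first guard
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @775, second guard
    ip=${ip_candidates[$TEST_TRANSPORT]}          # @776: ip holds a variable *name*
    [[ -z ${!ip} ]] && return 1                   # @778: indirect emptiness check
    echo "${!ip}"                                 # @783: 10.0.0.1 in this run
}]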
16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.658 nvme0n1 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:55.658 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.659 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.918 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.919 16:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.179 nvme0n1 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.179 16:23:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:28:56.179 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.180 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.122 nvme0n1 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.122 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.123 16:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.694 nvme0n1 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.694 
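Every attach in this log is preceded by get_main_ns_ip (nvmf/common.sh@769-783), whose trace shows the address-selection idiom: the helper maps each transport to the name of the environment variable holding the right source address, then dereferences that name with indirect expansion. A reconstruction from the traced checks; the exact guard conditions in the shipped helper are inferred, not shown verbatim in the xtrace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA connects from the target-side net
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP connects from the initiator net
        # Guards traced at nvmf/common.sh@775: transport and its mapping must be set
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # @776: ip holds a variable *name*
        [[ -z ${!ip} ]] && return 1                  # @778: here ${!ip} expands to 10.0.0.1
        echo "${!ip}"                                # @783
    }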
16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.694 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.695 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.695 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.695 16:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.265 nvme0n1 00:28:58.265 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.527 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.527 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.527 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.527 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.528 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.101 nvme0n1 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.101 16:23:34 
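The keyid 4 pass that follows is the one without a controller key: ckeys[4] is empty, so the ckey array built at auth.sh@58 expands to nothing and the attach at auth.sh@61 carries only --dhchap-key key4. The bash idiom in isolation, with hypothetical stand-in values (${var:+word} yields word only when var is set and non-empty, and quoting inside the alternate value is honored):

    declare -a ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=c3 [4]=)   # stand-ins, not the real keys
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # 2: the flag plus the key name ckey3
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # 0: the flag pair is omitted entirely, as in the keyid 4 attach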
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.101 16:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.101 16:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.101 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.045 nvme0n1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.045 nvme0n1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.045 16:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.307 nvme0n1 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:00.307 
16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.307 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.569 nvme0n1 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.569 
16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.569 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.570 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:00.570 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.570 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.831 nvme0n1 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:00.831 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.832 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.092 nvme0n1 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:01.092 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.093 16:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.354 nvme0n1 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.354 
16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.354 16:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.354 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.615 nvme0n1 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:01.615 16:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.615 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.876 nvme0n1 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.876 16:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.876 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.137 nvme0n1 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:02.137 
16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.137 16:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
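The sha512 passes above all drive the same cycle for each DH group and key index: restrict the initiator's DH-CHAP parameters with bdev_nvme_set_options, attach to the target with the key under test, confirm the controller registered, and detach before the next combination runs. Below is a minimal standalone sketch of one such iteration, assuming SPDK's scripts/rpc.py is the client behind the rpc_cmd wrapper seen in this trace and that the named keys (key1/ckey1) were loaded earlier in the run (their setup is not part of this excerpt); the RPC names and --dhchap-* flags are exactly those visible in the trace, everything else is illustrative.

    #!/usr/bin/env bash
    set -euo pipefail

    rpc=scripts/rpc.py   # assumed path to SPDK's RPC client
    digest=sha512
    dhgroup=ffdhe3072
    keyid=1

    # Allow only the digest/DH-group pair under test for this iteration.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP with the host key for this index; --dhchap-ctrlr-key
    # is passed only when a bidirectional (controller) key exists for it.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Authentication succeeded only if the controller is now visible by name.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear down so the next digest/dhgroup/keyid combination starts clean.
    "$rpc" bdev_nvme_detach_controller nvme0

In the xtrace output that name check appears as [[ nvme0 == \n\v\m\e\0 ]] because bash escapes the quoted right-hand side of == when printing the trace, and the bare nvme0n1 lines are the bdev name each successful attach RPC prints.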
00:29:02.398 nvme0n1 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.398 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:02.399 16:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.399 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.660 nvme0n1 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.660 16:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:02.660 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.661 16:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.661 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.921 nvme0n1 00:29:02.921 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.921 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.921 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.921 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.921 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.921 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.182 16:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.448 nvme0n1 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.448 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.449 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.710 nvme0n1 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.710 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.711 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.972 nvme0n1 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.972 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.233 16:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.233 16:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.495 nvme0n1 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.495 16:23:40 
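[Annotation] The block above is one pass of the sha512/ffdhe6144 key sweep: program the kernel nvmet target via nvmet_auth_set_key, restrict the SPDK host to the digest/dhgroup under test, attach with the matching key pair, verify the controller came up, then detach. A minimal sketch of that loop, assuming rpc_cmd wraps scripts/rpc.py and that the keys/ckeys arrays (key0..key4; ckey4 is empty, so keyid 4 attaches unidirectionally) were registered with the keyring earlier in the run:

    digest=sha512 dhgroup=ffdhe6144
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (auth.sh@103)
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}  # bidirectional only if a ckey exists
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done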
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.495 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.068 nvme0n1 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.068 16:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.642 nvme0n1 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.642 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.904 nvme0n1 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.904 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.166 16:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.166 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.167 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:06.167 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.167 16:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.428 nvme0n1 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
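[Annotation] That detach closes out the ffdhe6144 sweep; the same four-key pass now repeats for ffdhe8192. Note that xtrace does not print redirections, so the bare echo lines at auth.sh@48-51 look like no-ops; they are presumably being redirected into the kernel nvmet host entry, whose configfs attributes take exactly these values. A hedged reconstruction (the hostnqn and path are assumptions inferred from the -q argument used above; key material elided):

    hostnqn=nqn.2024-02.io.spdk:host0                     # assumed: matches -q above
    nvmet_host=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)'  > "$nvmet_host/dhchap_hash"      # auth.sh@48
    echo 'ffdhe8192'     > "$nvmet_host/dhchap_dhgroup"   # auth.sh@49
    echo 'DHHC-1:00:...' > "$nvmet_host/dhchap_key"       # auth.sh@50, host key
    echo 'DHHC-1:03:...' > "$nvmet_host/dhchap_ctrl_key"  # auth.sh@51, only when a ckey is set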
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFlZWRmMzEyMDFmNWQ1YTM1Y2Y1N2ViMDNiMDZhM2QMqhte: 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDEyY2NkOWEzZjlkZDIxMTZiOGFmMmY0YmI3MzIyMzE5OGU3MTllMWQ5YmUxZjY1NDhjYjU0ZGRjMTdlYzhmYWRso9E=: 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.428 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.370 nvme0n1 00:29:07.370 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.370 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.370 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.370 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.370 16:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.370 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.371 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 nvme0n1 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.943 16:23:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.943 16:23:43 
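[Annotation] Every attach is preceded by the same nvmf/common.sh@769-783 block resolving which address to dial: pick the name of the environment variable that holds the IP for the active transport, then dereference it. A sketch of just the branch this trace exercises (the transport variable name is an assumption; the trace only shows its expanded value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # both guards appear in the trace as [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]]
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion of that name
        echo "${!ip}"                          # -> 10.0.0.1 in this run
    }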
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.943 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.944 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.944 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.944 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.944 16:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.515 nvme0n1 00:29:08.515 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.515 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.515 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.515 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.515 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTE0YmFlMDhiOWEwYWE3ZmI3ODVkMWNmMGE0OTJiNDU4MjczMzRiOTQ4ODgyNTZmXYRHGg==: 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzNGYzMWVmYjAwZmU5OGE4OGY1NGEzYTVlNTU2YjMsXuSR: 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.775 16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.775 
16:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.347 nvme0n1 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZWU3ZGJjNDdhNjU2NzVlZjg2MWUxNTY0YjE1YzJmOTQ5YjgxNjMyY2FmODkyNTE5MGIyNWJiMTBjZmI2MZEVcrk=: 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.347 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.348 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.348 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:09.348 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.348 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.288 nvme0n1 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.288 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.289 16:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.289 request: 00:29:10.289 { 00:29:10.289 "name": "nvme0", 00:29:10.289 "trtype": "tcp", 00:29:10.289 "traddr": "10.0.0.1", 00:29:10.289 "adrfam": "ipv4", 00:29:10.289 "trsvcid": "4420", 00:29:10.289 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:10.289 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:10.289 "prchk_reftag": false, 00:29:10.289 "prchk_guard": false, 00:29:10.289 "hdgst": false, 00:29:10.289 "ddgst": false, 00:29:10.289 "allow_unrecognized_csi": false, 00:29:10.289 "method": "bdev_nvme_attach_controller", 00:29:10.289 "req_id": 1 00:29:10.289 } 00:29:10.289 Got JSON-RPC error response 00:29:10.289 response: 00:29:10.289 { 00:29:10.289 "code": -5, 00:29:10.289 "message": "Input/output error" 00:29:10.289 } 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
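[Annotation] First negative case: with the target re-keyed for sha256/ffdhe2048, the host attaches while offering no DH-CHAP key at all, and the attach correctly fails with JSON-RPC error -5 (Input/output error). The NOT wrapper traced at autotest_common.sh@652-679 turns that expected failure (es=1) into a test pass; a sketch of the pattern, with the wrapper body reconstructed from the trace markers rather than copied from the source:

    NOT() {
        local es=0
        "$@" || es=$?
        # per the trace, (( es > 128 )) would flag death-by-signal and
        # [[ -n '' ]] checks an (unused) expected-error pattern; the final
        # arithmetic test inverts the status, so only a failing command passes
        (( !es == 0 ))
    }
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0   # no --dhchap-key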
00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.289 request: 00:29:10.289 { 00:29:10.289 "name": "nvme0", 00:29:10.289 "trtype": "tcp", 00:29:10.289 "traddr": "10.0.0.1", 00:29:10.289 "adrfam": "ipv4", 00:29:10.289 "trsvcid": "4420", 00:29:10.289 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:10.289 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:10.289 "prchk_reftag": false, 00:29:10.289 "prchk_guard": false, 00:29:10.289 "hdgst": false, 00:29:10.289 "ddgst": false, 00:29:10.289 "dhchap_key": "key2", 00:29:10.289 "allow_unrecognized_csi": false, 00:29:10.289 "method": "bdev_nvme_attach_controller", 00:29:10.289 "req_id": 1 00:29:10.289 } 00:29:10.289 Got JSON-RPC error response 00:29:10.289 response: 00:29:10.289 { 00:29:10.289 "code": -5, 00:29:10.289 "message": "Input/output error" 00:29:10.289 } 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
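[Annotation] Second rejection confirmed: offering key2 when the target expects keyid 1 also fails with -5. Together with the case traced next (correct host key, wrong controller key), that completes the negative matrix for this digest/dhgroup; summarized below with a hypothetical args shorthand for the common connection parameters:

    args=(-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0)
    NOT rpc_cmd bdev_nvme_attach_controller "${args[@]}"                    # no key offered
    NOT rpc_cmd bdev_nvme_attach_controller "${args[@]}" --dhchap-key key2  # wrong host key
    NOT rpc_cmd bdev_nvme_attach_controller "${args[@]}" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2                          # wrong ctrlr key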
00:29:10.289 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:10.290 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.549 request: 00:29:10.549 { 00:29:10.549 "name": "nvme0", 00:29:10.549 "trtype": "tcp", 00:29:10.549 "traddr": "10.0.0.1", 00:29:10.549 "adrfam": "ipv4", 00:29:10.549 "trsvcid": "4420", 00:29:10.549 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:10.549 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:10.549 "prchk_reftag": false, 00:29:10.549 "prchk_guard": false, 00:29:10.549 "hdgst": false, 00:29:10.549 "ddgst": false, 00:29:10.549 "dhchap_key": "key1", 00:29:10.549 "dhchap_ctrlr_key": "ckey2", 00:29:10.549 "allow_unrecognized_csi": false, 00:29:10.549 "method": "bdev_nvme_attach_controller", 00:29:10.549 "req_id": 1 00:29:10.549 } 00:29:10.549 Got JSON-RPC error response 00:29:10.549 response: 00:29:10.549 { 00:29:10.549 "code": -5, 00:29:10.549 "message": "Input/output 
error" 00:29:10.549 } 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.549 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.549 nvme0n1 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.550 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.809 request: 00:29:10.809 { 00:29:10.809 "name": "nvme0", 00:29:10.809 "dhchap_key": "key1", 00:29:10.809 "dhchap_ctrlr_key": "ckey2", 00:29:10.809 "method": "bdev_nvme_set_keys", 00:29:10.809 "req_id": 1 00:29:10.809 } 00:29:10.809 Got JSON-RPC error response 00:29:10.809 response: 00:29:10.809 { 00:29:10.809 "code": -13, 00:29:10.809 "message": "Permission denied" 00:29:10.809 } 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.809 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:10.810 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.810 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:10.810 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.810 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.810 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.810 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:10.810 16:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:12.193 16:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.193 16:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:12.193 16:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.193 16:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.193 16:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.193 16:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:12.193 16:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RlMTNhZmViZGE2ZTQ2MGViYWMxNzAxYWMwZjRkMTM5OTUwOTcyMjQyMDBiNjNjZylLdw==: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: ]] 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWRlZjg2MWE3MzUwYjhkN2I4YzEyYzBiNzBhNDc4NDQ2OTg0MjJkZDYzNTkyMDczoYGpMA==: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.228 nvme0n1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTc2Y2JmYmVhOGJhNDcxYmJmNTQ3YTk4OWE5NDRiYjd6sq50: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: ]] 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzNTJjNWNmNzc1M2FlZjIwNTAyNGQzYmZkODRhZWQ5ed/6: 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.228 16:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.228 request: 00:29:13.228 { 00:29:13.228 "name": "nvme0", 00:29:13.228 "dhchap_key": "key2", 00:29:13.228 "dhchap_ctrlr_key": "ckey1", 00:29:13.228 "method": "bdev_nvme_set_keys", 00:29:13.228 "req_id": 1 00:29:13.228 } 00:29:13.228 Got JSON-RPC error response 00:29:13.228 response: 00:29:13.228 { 00:29:13.228 "code": -13, 00:29:13.228 "message": "Permission denied" 00:29:13.228 } 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:13.228 16:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:14.170 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.170 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:14.170 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.170 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.170 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:14.431 16:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.431 rmmod nvme_tcp 00:29:14.431 rmmod nvme_fabrics 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1437115 ']' 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1437115 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1437115 ']' 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1437115 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1437115 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1437115' 00:29:14.431 killing process with pid 1437115 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1437115 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1437115 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:14.431 16:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.979 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.979 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:16.979 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:16.979 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:16.979 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:16.979 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:16.980 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:16.980 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:16.980 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:16.980 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:16.980 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:16.980 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:16.980 16:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:20.284 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:20.284 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:20.856 16:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.YFE /tmp/spdk.key-null.QFf /tmp/spdk.key-sha256.26Q /tmp/spdk.key-sha384.LJC /tmp/spdk.key-sha512.aCD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:20.856 16:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:24.159 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
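The cleanup traced above tears down the kernel nvmet target through configfs, which only allows leaf-first removal: host ACL link, namespace, and port binding must go before the subsystem directory, and the modules unload only once the tree is empty. A hedged reconstruction of those steps with paths taken from the trace; the attribute receiving "echo 0" is not shown explicitly and is presumed to be the namespace enable flag:
cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"     # drop the host ACL symlink
rmdir $cfg/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/namespaces/1/enable"                   # take the namespace offline (assumed target of "echo 0")
rm -f $cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 # unbind the subsystem from the port
rmdir "$subsys/namespaces/1" $cfg/ports/1 "$subsys"
modprobe -r nvmet_tcp nvmet                              # modules unload only when the tree is empty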
00:29:24.160 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:24.160 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:24.160 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:24.732 00:29:24.732 real 1m0.836s 00:29:24.732 user 0m54.488s 00:29:24.732 sys 0m16.239s 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.732 ************************************ 00:29:24.732 END TEST nvmf_auth_host 00:29:24.732 ************************************ 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.732 ************************************ 00:29:24.732 START TEST nvmf_digest 00:29:24.732 ************************************ 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:24.732 * Looking for test storage... 
00:29:24.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:24.732 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.995 --rc genhtml_branch_coverage=1 00:29:24.995 --rc genhtml_function_coverage=1 00:29:24.995 --rc genhtml_legend=1 00:29:24.995 --rc geninfo_all_blocks=1 00:29:24.995 --rc geninfo_unexecuted_blocks=1 00:29:24.995 00:29:24.995 ' 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.995 --rc genhtml_branch_coverage=1 00:29:24.995 --rc genhtml_function_coverage=1 00:29:24.995 --rc genhtml_legend=1 00:29:24.995 --rc geninfo_all_blocks=1 00:29:24.995 --rc geninfo_unexecuted_blocks=1 00:29:24.995 00:29:24.995 ' 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.995 --rc genhtml_branch_coverage=1 00:29:24.995 --rc genhtml_function_coverage=1 00:29:24.995 --rc genhtml_legend=1 00:29:24.995 --rc geninfo_all_blocks=1 00:29:24.995 --rc geninfo_unexecuted_blocks=1 00:29:24.995 00:29:24.995 ' 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.995 --rc genhtml_branch_coverage=1 00:29:24.995 --rc genhtml_function_coverage=1 00:29:24.995 --rc genhtml_legend=1 00:29:24.995 --rc geninfo_all_blocks=1 00:29:24.995 --rc geninfo_unexecuted_blocks=1 00:29:24.995 00:29:24.995 ' 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.995 
16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.995 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.996 16:24:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.996 16:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.138 
16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:33.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:33.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:33.138 Found net devices under 0000:4b:00.0: cvl_0_0 
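The "Found net devices under ..." lines above come from walking sysfs: for every matching PCI function, the kernel exposes the bound netdev's name under /sys/bus/pci/devices/<bdf>/net/. A small sketch of that discovery step, using the two e810 BDFs this run actually found:
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue                 # skip if no netdev is bound
        echo "Found net devices under $pci: ${dev##*/}"
    done
done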
00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:33.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.138 16:24:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.138 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.138 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:33.138 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.138 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:29:33.139 00:29:33.139 --- 10.0.0.2 ping statistics --- 00:29:33.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.139 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:29:33.139 00:29:33.139 --- 10.0.0.1 ping statistics --- 00:29:33.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.139 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:33.139 ************************************ 00:29:33.139 START TEST nvmf_digest_clean 00:29:33.139 ************************************ 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1454218 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1454218 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1454218 ']' 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.139 16:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.139 [2024-11-20 16:24:08.395428] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:29:33.139 [2024-11-20 16:24:08.395488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.139 [2024-11-20 16:24:08.495989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.139 [2024-11-20 16:24:08.547370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.139 [2024-11-20 16:24:08.547421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.139 [2024-11-20 16:24:08.547430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.139 [2024-11-20 16:24:08.547438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.139 [2024-11-20 16:24:08.547444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
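Note: the trace above builds the loopback NVMe/TCP topology that the digest tests run on: one NIC port is moved into a private network namespace to act as the target side, while its peer port stays in the root namespace as the initiator, and the SPDK target is then launched inside that namespace (via the `ip netns exec` prefix in NVMF_TARGET_NS_CMD). A minimal sketch of the same steps, using placeholder interface names eth_tgt/eth_ini where the log uses cvl_0_0/cvl_0_1:

  NS=nvmf_tgt_ns
  ip netns add "$NS"
  ip link set eth_tgt netns "$NS"              # target port lives in the namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt
  ip addr add 10.0.0.1/24 dev eth_ini          # initiator port stays in the root ns
  ip netns exec "$NS" ip link set eth_tgt up
  ip netns exec "$NS" ip link set lo up
  ip link set eth_ini up
  # allow NVMe/TCP traffic in on the initiator-side port
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  # verify connectivity in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1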
00:29:33.139 [2024-11-20 16:24:08.548257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:33.408 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:33.409 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.409 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.409 null0 00:29:33.675 [2024-11-20 16:24:09.344127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.675 [2024-11-20 16:24:09.368453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1454650 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1454650 /var/tmp/bperf.sock 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1454650 ']' 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:33.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.675 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.675 [2024-11-20 16:24:09.429604] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:29:33.675 [2024-11-20 16:24:09.429667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454650 ] 00:29:33.675 [2024-11-20 16:24:09.493954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.675 [2024-11-20 16:24:09.543478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.936 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.936 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:33.936 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:33.936 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:33.936 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:34.196 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:34.196 16:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:34.457 nvme0n1 00:29:34.457 16:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:34.457 16:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:34.457 Running I/O for 2 seconds... 
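Note: bdevperf was started with -z and --wait-for-rpc, so it idles until configured over its private RPC socket (/var/tmp/bperf.sock). The sequence traced above reduces to three calls, reproduced here as a sketch with repo-relative paths in place of the full Jenkins workspace paths; the results for this 2-second run follow below.

  SOCK=/var/tmp/bperf.sock
  # finish bdevperf startup (it was launched with --wait-for-rpc)
  scripts/rpc.py -s "$SOCK" framework_start_init
  # create an NVMe/TCP bdev with data digest enabled (--ddgst exercises the crc32c path)
  scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the timed workload given on the bdevperf command line
  examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests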
00:29:36.785 19791.00 IOPS, 77.31 MiB/s [2024-11-20T15:24:12.721Z] 19872.00 IOPS, 77.62 MiB/s 00:29:36.785 Latency(us) 00:29:36.785 [2024-11-20T15:24:12.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.785 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:36.785 nvme0n1 : 2.00 19904.25 77.75 0.00 0.00 6425.50 2293.76 14745.60 00:29:36.785 [2024-11-20T15:24:12.721Z] =================================================================================================================== 00:29:36.785 [2024-11-20T15:24:12.721Z] Total : 19904.25 77.75 0.00 0.00 6425.50 2293.76 14745.60 00:29:36.785 { 00:29:36.785 "results": [ 00:29:36.785 { 00:29:36.785 "job": "nvme0n1", 00:29:36.785 "core_mask": "0x2", 00:29:36.785 "workload": "randread", 00:29:36.785 "status": "finished", 00:29:36.785 "queue_depth": 128, 00:29:36.785 "io_size": 4096, 00:29:36.785 "runtime": 2.00319, 00:29:36.785 "iops": 19904.25271691652, 00:29:36.785 "mibps": 77.75098717545515, 00:29:36.785 "io_failed": 0, 00:29:36.785 "io_timeout": 0, 00:29:36.785 "avg_latency_us": 6425.497956126271, 00:29:36.785 "min_latency_us": 2293.76, 00:29:36.785 "max_latency_us": 14745.6 00:29:36.785 } 00:29:36.785 ], 00:29:36.785 "core_count": 1 00:29:36.785 } 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:36.785 | select(.opcode=="crc32c") 00:29:36.785 | "\(.module_name) \(.executed)"' 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1454650 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1454650 ']' 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1454650 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1454650 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1454650' 00:29:36.785 killing process with pid 1454650 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1454650 00:29:36.785 Received shutdown signal, test time was about 2.000000 seconds 00:29:36.785 00:29:36.785 Latency(us) 00:29:36.785 [2024-11-20T15:24:12.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.785 [2024-11-20T15:24:12.721Z] =================================================================================================================== 00:29:36.785 [2024-11-20T15:24:12.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1454650 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1455487 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1455487 /var/tmp/bperf.sock 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1455487 ']' 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.785 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.786 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.786 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.786 16:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:37.047 [2024-11-20 16:24:12.763148] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:29:37.047 [2024-11-20 16:24:12.763238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455487 ] 00:29:37.047 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:37.047 Zero copy mechanism will not be used. 00:29:37.047 [2024-11-20 16:24:12.854952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.047 [2024-11-20 16:24:12.894027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.988 16:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.988 16:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:37.988 16:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:37.988 16:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:37.988 16:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:37.988 16:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.988 16:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.248 nvme0n1 00:29:38.248 16:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:38.248 16:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:38.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:38.248 Zero copy mechanism will not be used. 00:29:38.248 Running I/O for 2 seconds... 
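Note: the two notices above come from the sock layer: this run uses 128 KiB I/O, which exceeds the 65536-byte zero-copy send threshold, so sends fall back to copying. If zero copy were wanted for large blocks, the threshold can be raised over RPC before framework_start_init; the exact flag below is an assumption to verify against `scripts/rpc.py sock_impl_set_options --help` on the build in use, not something this test does.

  # Hedged sketch: raise the posix zero-copy threshold so 128 KiB sends qualify.
  # Flag name is an assumption; must run before framework_start_init.
  scripts/rpc.py -s /var/tmp/bperf.sock sock_impl_set_options \
      -i posix --zerocopy-threshold 131072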
00:29:40.571 3308.00 IOPS, 413.50 MiB/s [2024-11-20T15:24:16.507Z] 3718.50 IOPS, 464.81 MiB/s 00:29:40.571 Latency(us) 00:29:40.571 [2024-11-20T15:24:16.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.571 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:40.571 nvme0n1 : 2.00 3723.75 465.47 0.00 0.00 4293.83 604.16 11687.25 00:29:40.571 [2024-11-20T15:24:16.507Z] =================================================================================================================== 00:29:40.571 [2024-11-20T15:24:16.507Z] Total : 3723.75 465.47 0.00 0.00 4293.83 604.16 11687.25 00:29:40.571 { 00:29:40.571 "results": [ 00:29:40.571 { 00:29:40.571 "job": "nvme0n1", 00:29:40.571 "core_mask": "0x2", 00:29:40.571 "workload": "randread", 00:29:40.571 "status": "finished", 00:29:40.571 "queue_depth": 16, 00:29:40.571 "io_size": 131072, 00:29:40.571 "runtime": 2.001479, 00:29:40.571 "iops": 3723.7462896188267, 00:29:40.571 "mibps": 465.46828620235334, 00:29:40.571 "io_failed": 0, 00:29:40.571 "io_timeout": 0, 00:29:40.571 "avg_latency_us": 4293.830901203095, 00:29:40.571 "min_latency_us": 604.16, 00:29:40.571 "max_latency_us": 11687.253333333334 00:29:40.571 } 00:29:40.571 ], 00:29:40.571 "core_count": 1 00:29:40.571 } 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:40.571 | select(.opcode=="crc32c") 00:29:40.571 | "\(.module_name) \(.executed)"' 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1455487 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1455487 ']' 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1455487 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1455487 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1455487' 00:29:40.571 killing process with pid 1455487 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1455487 00:29:40.571 Received shutdown signal, test time was about 2.000000 seconds 00:29:40.571 00:29:40.571 Latency(us) 00:29:40.571 [2024-11-20T15:24:16.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.571 [2024-11-20T15:24:16.507Z] =================================================================================================================== 00:29:40.571 [2024-11-20T15:24:16.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.571 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1455487 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1456308 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1456308 /var/tmp/bperf.sock 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1456308 ']' 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:40.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.832 16:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.832 [2024-11-20 16:24:16.607037] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:29:40.832 [2024-11-20 16:24:16.607096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456308 ] 00:29:40.832 [2024-11-20 16:24:16.688541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.832 [2024-11-20 16:24:16.718143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.771 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.771 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:41.771 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:41.771 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:41.771 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:41.771 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:41.771 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.032 nvme0n1 00:29:42.292 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:42.292 16:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:42.292 Running I/O for 2 seconds... 
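Note: the IOPS and MiB/s columns in these result tables are unit conversions of each other (IOPS times io_size in bytes, divided by 2^20). A one-line check using the 128 KiB randread run above:

  # 3723.75 IOPS at 131072-byte blocks should reproduce the reported MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 3723.75 * 131072 / 1048576 }'   # -> 465.47 MiB/s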
00:29:44.176 30541.00 IOPS, 119.30 MiB/s [2024-11-20T15:24:20.112Z] 30548.00 IOPS, 119.33 MiB/s 00:29:44.176 Latency(us) 00:29:44.176 [2024-11-20T15:24:20.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.176 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.176 nvme0n1 : 2.00 30571.94 119.42 0.00 0.00 4182.72 1727.15 8028.16 00:29:44.176 [2024-11-20T15:24:20.112Z] =================================================================================================================== 00:29:44.176 [2024-11-20T15:24:20.112Z] Total : 30571.94 119.42 0.00 0.00 4182.72 1727.15 8028.16 00:29:44.176 { 00:29:44.176 "results": [ 00:29:44.176 { 00:29:44.176 "job": "nvme0n1", 00:29:44.176 "core_mask": "0x2", 00:29:44.176 "workload": "randwrite", 00:29:44.176 "status": "finished", 00:29:44.176 "queue_depth": 128, 00:29:44.176 "io_size": 4096, 00:29:44.176 "runtime": 2.002621, 00:29:44.176 "iops": 30571.93547855535, 00:29:44.176 "mibps": 119.42162296310684, 00:29:44.176 "io_failed": 0, 00:29:44.176 "io_timeout": 0, 00:29:44.176 "avg_latency_us": 4182.718982534083, 00:29:44.176 "min_latency_us": 1727.1466666666668, 00:29:44.176 "max_latency_us": 8028.16 00:29:44.176 } 00:29:44.176 ], 00:29:44.176 "core_count": 1 00:29:44.176 } 00:29:44.176 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:44.176 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:44.176 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:44.176 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:44.176 | select(.opcode=="crc32c") 00:29:44.176 | "\(.module_name) \(.executed)"' 00:29:44.176 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1456308 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1456308 ']' 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1456308 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1456308 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1456308' 00:29:44.437 killing process with pid 1456308 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1456308 00:29:44.437 Received shutdown signal, test time was about 2.000000 seconds 00:29:44.437 00:29:44.437 Latency(us) 00:29:44.437 [2024-11-20T15:24:20.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.437 [2024-11-20T15:24:20.373Z] =================================================================================================================== 00:29:44.437 [2024-11-20T15:24:20.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.437 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1456308 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1457075 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1457075 /var/tmp/bperf.sock 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1457075 ']' 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:44.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.699 16:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:44.699 [2024-11-20 16:24:20.483992] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
00:29:44.699 [2024-11-20 16:24:20.484048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457075 ] 00:29:44.699 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:44.699 Zero copy mechanism will not be used. 00:29:44.699 [2024-11-20 16:24:20.565723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.699 [2024-11-20 16:24:20.595779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.641 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.641 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:45.641 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:45.641 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:45.641 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:45.641 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:45.641 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:45.902 nvme0n1 00:29:45.902 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:45.902 16:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:46.163 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:46.163 Zero copy mechanism will not be used. 00:29:46.163 Running I/O for 2 seconds... 
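Note: this is the last of the four clean-digest combinations; the matrix is {randread, randwrite} x {4 KiB at QD 128, 128 KiB at QD 16}, all with DSA scanning disabled. The sweep traced at host/digest.sh@128-131 amounts to the loop below (run_bperf is the helper defined in host/digest.sh):

  # run_bperf <rw> <bs> <qd> <scan_dsa>, as invoked by the clean-digest test
  for rw in randread randwrite; do
      run_bperf "$rw" 4096 128 false       # small blocks, deep queue
      run_bperf "$rw" 131072 16 false      # large blocks, shallow queue
  done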
00:29:48.048 7228.00 IOPS, 903.50 MiB/s [2024-11-20T15:24:23.984Z] 7346.50 IOPS, 918.31 MiB/s 00:29:48.048 Latency(us) 00:29:48.048 [2024-11-20T15:24:23.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.048 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:48.048 nvme0n1 : 2.01 7336.95 917.12 0.00 0.00 2176.45 1338.03 11523.41 00:29:48.048 [2024-11-20T15:24:23.984Z] =================================================================================================================== 00:29:48.048 [2024-11-20T15:24:23.984Z] Total : 7336.95 917.12 0.00 0.00 2176.45 1338.03 11523.41 00:29:48.048 { 00:29:48.048 "results": [ 00:29:48.048 { 00:29:48.048 "job": "nvme0n1", 00:29:48.048 "core_mask": "0x2", 00:29:48.048 "workload": "randwrite", 00:29:48.048 "status": "finished", 00:29:48.048 "queue_depth": 16, 00:29:48.048 "io_size": 131072, 00:29:48.048 "runtime": 2.00533, 00:29:48.048 "iops": 7336.947036148664, 00:29:48.048 "mibps": 917.118379518583, 00:29:48.048 "io_failed": 0, 00:29:48.048 "io_timeout": 0, 00:29:48.048 "avg_latency_us": 2176.4476331588844, 00:29:48.048 "min_latency_us": 1338.0266666666666, 00:29:48.048 "max_latency_us": 11523.413333333334 00:29:48.048 } 00:29:48.048 ], 00:29:48.048 "core_count": 1 00:29:48.048 } 00:29:48.048 16:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:48.048 16:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:48.048 16:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:48.048 16:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:48.048 | select(.opcode=="crc32c") 00:29:48.048 | "\(.module_name) \(.executed)"' 00:29:48.048 16:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1457075 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1457075 ']' 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1457075 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457075 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457075' 00:29:48.308 killing process with pid 1457075 00:29:48.308 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1457075 00:29:48.308 Received shutdown signal, test time was about 2.000000 seconds 00:29:48.308 00:29:48.308 Latency(us) 00:29:48.308 [2024-11-20T15:24:24.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.308 [2024-11-20T15:24:24.244Z] =================================================================================================================== 00:29:48.308 [2024-11-20T15:24:24.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.309 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1457075 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1454218 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1454218 ']' 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1454218 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1454218 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1454218' 00:29:48.569 killing process with pid 1454218 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1454218 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1454218 00:29:48.569 00:29:48.569 real 0m16.153s 00:29:48.569 user 0m31.817s 00:29:48.569 sys 0m3.728s 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.569 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:48.569 ************************************ 00:29:48.569 END TEST nvmf_digest_clean 00:29:48.569 ************************************ 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:48.831 ************************************ 00:29:48.831 START TEST nvmf_digest_error 00:29:48.831 ************************************ 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1457786 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1457786 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1457786 ']' 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.831 16:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:48.831 [2024-11-20 16:24:24.615108] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:29:48.831 [2024-11-20 16:24:24.615166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.831 [2024-11-20 16:24:24.707631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.832 [2024-11-20 16:24:24.738557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.832 [2024-11-20 16:24:24.738585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.832 [2024-11-20 16:24:24.738591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.832 [2024-11-20 16:24:24.738595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.832 [2024-11-20 16:24:24.738600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:48.832 [2024-11-20 16:24:24.739042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.773 [2024-11-20 16:24:25.448975] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.773 null0 00:29:49.773 [2024-11-20 16:24:25.526953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.773 [2024-11-20 16:24:25.551167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1458134 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1458134 /var/tmp/bperf.sock 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1458134 ']' 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
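Note: the digest-error test reuses the clean-test plumbing but first reroutes every crc32c operation on the target to the accel "error" module, which can later be told to corrupt results on demand. Unlike the clean test, bdevperf is launched here with -z only (no --wait-for-rpc), so no framework_start_init call is needed on the bperf socket. The target-side setup is a single RPC:

  # route all crc32c work through the accel "error" module on the target
  # (rpc_cmd talks to the target's default socket, /var/tmp/spdk.sock)
  scripts/rpc.py accel_assign_opc -o crc32c -m error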
00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:49.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:49.773 16:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.773 [2024-11-20 16:24:25.608725] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:29:49.773 [2024-11-20 16:24:25.608772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458134 ] 00:29:49.773 [2024-11-20 16:24:25.690566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.034 [2024-11-20 16:24:25.720226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.605 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.605 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:50.605 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:50.605 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:50.866 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:50.866 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.866 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.866 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.866 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:50.866 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:51.128 nvme0n1 00:29:51.128 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:51.128 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.128 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
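Note: the host is first armed to tolerate the injected failures (--bdev-retry-count -1 retries indefinitely, --nvme-error-stat records the error counters), injection is left at "-t disable" so the controller attaches cleanly, and only then is the target told to corrupt every 256th crc32c result. Each corruption surfaces on the host as a data digest error and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, visible in the storm of nvme_tcp.c/nvme_qpair.c messages that follows. The two knobs, as a sketch:

  # host side (bperf socket): record NVMe error stats, retry failed I/O forever
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # target side (default socket): corrupt the result of every 256th crc32c op
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256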
00:29:51.128 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.128 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:51.128 16:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:51.128 Running I/O for 2 seconds...
00:29:51.128 [2024-11-20 16:24:26.964681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0)
00:29:51.128 [2024-11-20 16:24:26.964717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.128 [2024-11-20 16:24:26.964726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 16:24:26.97 through 16:24:27.95: the same three-line pattern repeats for dozens of further READs on tqpair=(0x1aa50e0) -- a data digest error from nvme_tcp.c:1365, the affected READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; only the timestamps, cid, and lba values differ ...]
00:29:52.180 27919.00 IOPS, 109.06 MiB/s [2024-11-20T15:24:28.116Z]
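Each repeated triple above is one READ failing the NVMe/TCP data digest (DDGST) check: the host recomputes CRC32C over the received data PDU payload, compares it with the digest carried in the PDU, and on a mismatch fails the command with a transient transport error (00/22). The nvmf_digest_error test corrupts digests deliberately, so every I/O in this run is expected to fail this way. A minimal Python sketch of that receiver-side check follows; it is illustrative only, not SPDK's C implementation, and the helper names are made up:

def _crc32c_table():
    # Reflected CRC32C (Castagnoli) polynomial, the digest NVMe/TCP uses.
    poly = 0x82F63B78
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_CRC32C_TABLE = _crc32c_table()

def crc32c(data: bytes) -> int:
    # Standard table-driven reflected CRC32C with init/xorout of 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc = _CRC32C_TABLE[(crc ^ byte) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

def ddgst_ok(payload: bytes, received_ddgst: int) -> bool:
    # Receiver side: recompute the digest over the payload and compare it
    # with the DDGST field that arrived in the data PDU.
    return crc32c(payload) == received_ddgst

payload = b"example data PDU payload"
digest = crc32c(payload)
assert ddgst_ok(payload, digest)          # clean transfer passes
assert not ddgst_ok(payload, digest ^ 1)  # one flipped bit -> digest error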
[... 16:24:27.96 through 16:24:28.20: the digest-error pattern continues for a further couple dozen READs on the same tqpair ...]
00:29:52.443 [2024-11-20 16:24:28.208072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0)
00:29:52.443 [2024-11-20 16:24:28.208088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.443 [2024-11-20 16:24:28.208095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.217258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.217275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.217281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.226276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.226293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.226299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.236122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.236138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.236144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.244453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.244470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.244476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.253992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.254008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.254014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.262420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.262436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.262442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.270506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.270523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.270529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.279738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.279755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.279761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.289312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.289328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.289335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.298338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.298354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.298364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.305908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.305924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.305931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.314594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.314610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.314617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.323668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.323685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.323691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.333745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.333762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.333769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.345331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.345348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.345354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.354660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.354676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.354682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.363507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.363524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.363530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.443 [2024-11-20 16:24:28.372323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.443 [2024-11-20 16:24:28.372341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.443 [2024-11-20 16:24:28.372347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.381681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.381701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.381708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.390451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.390467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.390473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.399388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 
00:29:52.706 [2024-11-20 16:24:28.399405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.399411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.408945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.408963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.408969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.416466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.416483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.416489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.427946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.427962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.427969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.437408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.437425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.437431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.445954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.445971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.445978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.454570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.454587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.454593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.463772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.463789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.463796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.706 [2024-11-20 16:24:28.472798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.706 [2024-11-20 16:24:28.472815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.706 [2024-11-20 16:24:28.472821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.482071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.482088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.482094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.491263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.491280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.491286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.501375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.501392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.501398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.511487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.511503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.511510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.520461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.520478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.520484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.528253] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.528270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.528276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.537534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.537551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.537560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.546874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.546891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.546898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.554863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.554879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.554886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.564110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.564126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.564133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.572805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.572822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.572828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.581992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.582009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.582015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:52.707 [2024-11-20 16:24:28.590450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.590466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.590473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.600602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.600619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.600625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.608562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.608579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.608585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.618564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.618581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.618587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.626150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.626171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.626178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.707 [2024-11-20 16:24:28.634835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.707 [2024-11-20 16:24:28.634852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-11-20 16:24:28.634858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.969 [2024-11-20 16:24:28.644708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.969 [2024-11-20 16:24:28.644725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.644731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.652612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.652629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.652635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.661986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.662004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.662010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.670734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.670751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.670757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.682741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.682758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.682765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.691787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.691803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.691813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.700865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.700882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.700888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.708954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.708971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.708977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.717995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.718012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.718018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.727401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.727418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.727424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.735885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.735902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.735909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.744742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.744760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.744766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.754694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.754711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.754718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.763311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.763328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.763334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.771617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.771636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.970 [2024-11-20 16:24:28.771643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.780331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.780348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.780354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.790348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.790366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.790372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.798880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.798896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.798903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.807864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.807880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.807887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.815925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.815941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.815947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.826338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.826354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.826361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.834482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.834499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:7767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.834505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.844275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.844292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.844298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.852874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.852891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.852897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.861467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.861484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.861490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.871050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.871067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.871074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.970 [2024-11-20 16:24:28.879598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.970 [2024-11-20 16:24:28.879615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.970 [2024-11-20 16:24:28.879621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.971 [2024-11-20 16:24:28.888736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.971 [2024-11-20 16:24:28.888753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.971 [2024-11-20 16:24:28.888759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.971 [2024-11-20 16:24:28.897687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:52.971 [2024-11-20 16:24:28.897704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.971 [2024-11-20 16:24:28.897710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.232 [2024-11-20 16:24:28.906358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:53.232 [2024-11-20 16:24:28.906374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.232 [2024-11-20 16:24:28.906380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.232 [2024-11-20 16:24:28.914278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:53.232 [2024-11-20 16:24:28.914294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.232 [2024-11-20 16:24:28.914301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.232 [2024-11-20 16:24:28.924285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:53.232 [2024-11-20 16:24:28.924302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.232 [2024-11-20 16:24:28.924311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.232 [2024-11-20 16:24:28.936339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:53.232 [2024-11-20 16:24:28.936356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.232 [2024-11-20 16:24:28.936363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.232 [2024-11-20 16:24:28.946328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:53.232 [2024-11-20 16:24:28.946345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.232 [2024-11-20 16:24:28.946351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.232 27881.50 IOPS, 108.91 MiB/s [2024-11-20T15:24:29.168Z] [2024-11-20 16:24:28.955614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aa50e0) 00:29:53.232 [2024-11-20 16:24:28.955630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.232 [2024-11-20 16:24:28.955636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.232 00:29:53.232 Latency(us) 00:29:53.232 [2024-11-20T15:24:29.168Z] Device Information : 
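Every failure above follows the same three-line shape: nvme_tcp.c:1365 flags the injected CRC32C data digest mismatch on receive, nvme_qpair.c prints the READ it belongs to, and the command completes with TRANSIENT TRANSPORT ERROR, where (00/22) encodes status code type 0x00 (generic) and status code 0x22, and dnr:0 leaves the command retryable. Because the harness starts each bdevperf instance with --nvme-error-stat (host/digest.sh@61, visible further down), these completions are also tallied per status code, which is what the test reads back next. A quick way to count them from a saved copy of this console output (the file name is only an assumption for illustration):

    # hypothetical saved copy of this console log; each failed READ prints
    # exactly one completion line, so the match count equals the error count
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log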
00:29:53.232
00:29:53.232 Latency(us)
00:29:53.232 [2024-11-20T15:24:29.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:53.232 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:53.232 nvme0n1 : 2.00 27892.24 108.95 0.00 0.00 4583.55 2348.37 13707.95
00:29:53.232 [2024-11-20T15:24:29.168Z] ===================================================================================================================
00:29:53.232 [2024-11-20T15:24:29.168Z] Total : 27892.24 108.95 0.00 0.00 4583.55 2348.37 13707.95
00:29:53.232 {
00:29:53.232   "results": [
00:29:53.232     {
00:29:53.232       "job": "nvme0n1",
00:29:53.232       "core_mask": "0x2",
00:29:53.232       "workload": "randread",
00:29:53.232       "status": "finished",
00:29:53.232       "queue_depth": 128,
00:29:53.232       "io_size": 4096,
00:29:53.232       "runtime": 2.003819,
00:29:53.232       "iops": 27892.239768162694,
00:29:53.232       "mibps": 108.95406159438552,
00:29:53.232       "io_failed": 0,
00:29:53.232       "io_timeout": 0,
00:29:53.232       "avg_latency_us": 4583.545971504059,
00:29:53.232       "min_latency_us": 2348.3733333333334,
00:29:53.232       "max_latency_us": 13707.946666666667
00:29:53.232     }
00:29:53.232   ],
00:29:53.232   "core_count": 1
00:29:53.232 }
00:29:53.232 16:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:53.232 16:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:53.232 16:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:53.232 | .driver_specific
00:29:53.232 | .nvme_error
00:29:53.232 | .status_code
00:29:53.232 | .command_transient_transport_error'
00:29:53.232 16:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:53.232 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:29:53.232 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1458134
00:29:53.232 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1458134 ']'
00:29:53.232 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1458134
00:29:53.232 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1458134
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1458134'
00:29:53.492 killing process with pid 1458134
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1458134
00:29:53.492 Received shutdown signal, test time was about 2.000000 seconds
00:29:53.492
00:29:53.492 Latency(us)
00:29:53.492 [2024-11-20T15:24:29.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:53.492 [2024-11-20T15:24:29.428Z] ===================================================================================================================
00:29:53.492 [2024-11-20T15:24:29.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
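The (( 219 > 0 )) check at host/digest.sh@71 is the actual pass criterion for this case: with --nvme-error-stat set, bdev_get_iostat carries a per-NVMe-status-code error histogram, and this run accumulated 219 transient transport errors on nvme0n1 even though bdevperf itself reported io_failed: 0, since dnr:0 completions are retried. A minimal sketch of the helper traced above (host/digest.sh@27-28), assuming the same SPDK checkout and that bdevperf is still serving /var/tmp/bperf.sock:

    # mirrors the captured trace; paths and bdev name as in the log
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    (( $(get_transient_errcount nvme0n1) > 0 ))   # 219 in the run above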
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1458134
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1458819
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1458819 /var/tmp/bperf.sock
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1458819 ']'
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:53.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:53.492 16:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:53.492 [2024-11-20 16:24:29.376757] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization...
00:29:53.492 [2024-11-20 16:24:29.376813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458819 ]
00:29:53.492 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:53.492 Zero copy mechanism will not be used.
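run_bperf_err now repeats the experiment with a heavier I/O profile: 128 KiB random reads at queue depth 16 instead of 4 KiB at depth 128. In the bdevperf invocation above, -m 2 is the core mask (core 1, matching the reactor notice below), -r names the RPC socket, -w/-o/-t/-q set the workload, I/O size, runtime, and queue depth, and -z keeps bdevperf idle until a perform_tests RPC arrives. A condensed sketch of the launch-and-wait step, with waitforlisten standing in for the harness helper traced above:

    # condensed from host/digest.sh@56-60; paths exactly as captured in the log
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # harness helper: poll until the RPC socket answers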
00:29:53.752 [2024-11-20 16:24:29.457610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:53.752 [2024-11-20 16:24:29.486100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:54.337 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:54.337 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:54.337 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:54.337 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:54.598 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:54.598 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:54.598 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:54.598 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:54.598 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:54.598 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:54.860 nvme0n1
00:29:54.860 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:54.860 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:54.860 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:54.860 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:54.860 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:54.860 16:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:54.860 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:54.860 Zero copy mechanism will not be used.
00:29:54.860 Running I/O for 2 seconds...
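Everything between process start and "Running I/O" above is plain JSON-RPC against the bperf socket: NVMe error statistics are switched on with the bdev retry count left unbounded (-1), any stale CRC32C injection is cleared, the controller is attached with TCP data digest enabled (--ddgst), the accel error injector is armed to corrupt CRC32C results (arguments exactly as captured, including -i 32), and only then does perform_tests start the clock. Condensed into one sequence (the $rpc shorthand is ours; calls and arguments as traced):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable     # clear any leftover injection
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # prints the new bdev: nvme0n1
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests               # runs the 2-second workload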
00:29:54.860 [2024-11-20 16:24:30.685343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.685385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.694446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.694470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.694478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.702886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.702906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.702913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.713752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.713772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.713778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.726301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.726319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.726325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.735136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.735154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.735172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.740055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.740074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.740080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.747986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.748004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.748011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.758502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.758520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.769163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.769182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.769189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.780725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.780744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.780750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:54.860 [2024-11-20 16:24:30.790580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:54.860 [2024-11-20 16:24:30.790597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.860 [2024-11-20 16:24:30.790604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.121 [2024-11-20 16:24:30.796002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.121 [2024-11-20 16:24:30.796022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.796032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.798579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.798597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.798603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.806974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.806992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.806999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.815690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.815707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.815713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.827332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.827350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.827356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.830479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.830497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.830503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.835723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.835741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.835748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.845945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.845964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.845971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.856284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.856302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.856309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.866216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.866237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.866243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.875838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.875857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.875863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.886835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.886853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.886859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.896303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.896322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.896328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.907934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.907952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.907959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.918873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.918891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.918897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.930945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.930964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 
[2024-11-20 16:24:30.930970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.942426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.942444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.942450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.955246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.955265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.955271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.966774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.966792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.966799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.976155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.976181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.976187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.987234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.987252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.987259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:30.998027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:30.998045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:30.998052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:31.004060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:31.004078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:31.004085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:31.016090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:31.016108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:31.016115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:31.027909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:31.027927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:31.027934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:31.039870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:31.039889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:31.039895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.122 [2024-11-20 16:24:31.051223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.122 [2024-11-20 16:24:31.051242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.122 [2024-11-20 16:24:31.051252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.063435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.063454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.063460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.073412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.073430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.073436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.079495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.079513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.079519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.088803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.088821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.088827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.098858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.098876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.098882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.108485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.108502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.108509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.114506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.114524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.114530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.119320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.119338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.119344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.125683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.125705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.125711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.133163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.133181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.133187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.140001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.140018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.140024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.146999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.147018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.147024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.157305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.157323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.157329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.166492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.166510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.166516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.384 [2024-11-20 16:24:31.174944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.384 [2024-11-20 16:24:31.174962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.384 [2024-11-20 16:24:31.174969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.183323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.183341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.183347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.194830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 
[2024-11-20 16:24:31.194848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.194854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.207341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.207359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.207365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.218711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.218729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.218735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.225957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.225975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.225982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.235407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.235425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.235431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.240220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.240238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.240244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.251061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.251079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.251085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.258339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.258357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.258363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.266229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.266247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.266253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.276861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.276878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.276887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.285426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.285444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.285451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.293279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.293297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.293304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.299100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.299119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.299125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.307860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.307878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.307884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.385 [2024-11-20 16:24:31.314656] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.385 [2024-11-20 16:24:31.314675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.385 [2024-11-20 16:24:31.314681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.322753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.322772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.322778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.330676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.330694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.330701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.337519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.337537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.337543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.347608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.347626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.347633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.358312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.358330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.358337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.370126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.370144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.370150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:55.647 [2024-11-20 16:24:31.382030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.382048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.382054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.393778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.647 [2024-11-20 16:24:31.393796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.647 [2024-11-20 16:24:31.393802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.647 [2024-11-20 16:24:31.405833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.405851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.405857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.415774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.415792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.415799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.423687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.423705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.423711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.428286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.428303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.428313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.437658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.437676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.437682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.444389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.444407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.444414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.449941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.449959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.449965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.454456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.454474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.454480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.459722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.459740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.459747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.469224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.469241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.469248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.476084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.476101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.476108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.482501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.482519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.482525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.491798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.491819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.491825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.496251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.496269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.496275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.500583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.500601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.500608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.505146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.505170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.505176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.509546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.509564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.509570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.514198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.514215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.514221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.520242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.520260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.520266] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.524834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.524852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.524858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.529393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.529411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.529417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.540470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.540488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.540494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.547279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.547297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.547303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.557588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.557606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.557612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.563893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.563911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.563917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.574329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.574346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:55.648 [2024-11-20 16:24:31.574353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.648 [2024-11-20 16:24:31.578282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.648 [2024-11-20 16:24:31.578301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.648 [2024-11-20 16:24:31.578307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.910 [2024-11-20 16:24:31.582239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.910 [2024-11-20 16:24:31.582257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.910 [2024-11-20 16:24:31.582264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.910 [2024-11-20 16:24:31.587811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.910 [2024-11-20 16:24:31.587829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.910 [2024-11-20 16:24:31.587835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.910 [2024-11-20 16:24:31.590550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.590566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.590576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.599437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.599455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.599461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.610576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.610593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.610599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.620228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.620246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.620252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.628210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.628227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.628233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.633507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.633524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.633531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.641644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.641662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.641668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.648959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.648977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.648983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.654100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.654118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.654124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.660842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.660866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.660872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.668455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.668473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.668479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.677404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.677421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.677427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.911 3658.00 IOPS, 457.25 MiB/s [2024-11-20T15:24:31.847Z] [2024-11-20 16:24:31.688866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.688883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.688890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.700740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.700757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.700763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.712458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.712475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.712481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.723456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.723474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.723480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.736047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.736065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.736071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.748643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 
00:29:55.911 [2024-11-20 16:24:31.748661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.748667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.760189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.760206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.768628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.768645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.768652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.780604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.780621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.780627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.785967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.785984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.785990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.790485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.790502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.790508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.800890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.800907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.800913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.812447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.812463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.812469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.818498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.818515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.911 [2024-11-20 16:24:31.818521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:55.911 [2024-11-20 16:24:31.828778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.911 [2024-11-20 16:24:31.828796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.912 [2024-11-20 16:24:31.828805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:55.912 [2024-11-20 16:24:31.834251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:55.912 [2024-11-20 16:24:31.834268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.912 [2024-11-20 16:24:31.834274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.173 [2024-11-20 16:24:31.844788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.173 [2024-11-20 16:24:31.844806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.173 [2024-11-20 16:24:31.844812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.173 [2024-11-20 16:24:31.855369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.173 [2024-11-20 16:24:31.855387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.173 [2024-11-20 16:24:31.855393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.173 [2024-11-20 16:24:31.863261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.173 [2024-11-20 16:24:31.863279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.863285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.869758] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.869775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.869781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.875651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.875669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.875675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.883711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.883728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.883735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.892008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.892025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.892031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.897039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.897056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.897062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.901466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.901484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.901490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.906874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.906891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.906897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.914517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.914535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.914541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.920858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.920875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.920881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.926947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.926965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.926971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.934695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.934714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.934720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.939562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.939580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.939586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.944135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.944153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.944167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.949421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.949439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.949446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.953778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.953796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.953803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.958281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.958300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.958307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.964681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.964699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.964706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.975892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.975911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.975917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.984681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.984698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.984705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.989243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.989261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.989267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:31.994754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:31.994772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:31.994778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:32.000682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:32.000703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:32.000710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:32.009347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:32.009366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:32.009372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:32.017810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:32.017828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:32.017835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:32.022372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:32.022390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:32.022396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:32.026784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.174 [2024-11-20 16:24:32.026803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.174 [2024-11-20 16:24:32.026809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.174 [2024-11-20 16:24:32.034085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.034103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.034109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.043362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.043380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 
[2024-11-20 16:24:32.043386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.050300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.050318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.050324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.062657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.062675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.062682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.070999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.071017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.071023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.077498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.077517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.077523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.083331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.083350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.083356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.090737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.090756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.090762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.095348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.095367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.095374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.100585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.100604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.100611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.175 [2024-11-20 16:24:32.105858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.175 [2024-11-20 16:24:32.105877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.175 [2024-11-20 16:24:32.105883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.116927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.116946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.116953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.128617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.128636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.128646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.139840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.139858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.139865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.150261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.150279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.150286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.160228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.160246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.160253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.170984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.171002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.171009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.180176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.180194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.180201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.186521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.186540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.186546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.196685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.196703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.196709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.208381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.208399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.208406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.217820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.217841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.217847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.226774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.226792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.226799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.237580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.237598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.237605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.249344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.249362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.249368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.261976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.261993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.261999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.274351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.274369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.274375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.286321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.286339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.286346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.297203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.297221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.297228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.309117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 
[2024-11-20 16:24:32.309136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.309142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.318844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.318862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.318868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.329604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.329622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.329628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.340262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.340280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.340287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.351530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.351548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.351555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.437 [2024-11-20 16:24:32.361398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.437 [2024-11-20 16:24:32.361417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.437 [2024-11-20 16:24:32.361424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.371937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.371955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.371962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.384143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.384167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.384174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.394847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.394866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.394872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.405012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.405030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.405039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.416119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.416137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.416144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.426340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.426358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.426364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.435276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.435294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.435300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.445150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.445174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.445180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.455928] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.455947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.455953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.467610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.467628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.467635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.480014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.480033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.480039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.492273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.492292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.492298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.699 [2024-11-20 16:24:32.504713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.699 [2024-11-20 16:24:32.504732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.699 [2024-11-20 16:24:32.504738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.516938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.516962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.529779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.529797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.529804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:29:56.700 [2024-11-20 16:24:32.542930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.542949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.542955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.555121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.555140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.555146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.568117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.568135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.568142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.581031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.581050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.581056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.592719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.592737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.592744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.604827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.604845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.604854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.700 [2024-11-20 16:24:32.617128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750) 00:29:56.700 [2024-11-20 16:24:32.617146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.700 [2024-11-20 16:24:32.617153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:56.700 [2024-11-20 16:24:32.629494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750)
00:29:56.700 [2024-11-20 16:24:32.629513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:56.700 [2024-11-20 16:24:32.629521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:56.961 [2024-11-20 16:24:32.638233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750)
00:29:56.961 [2024-11-20 16:24:32.638251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:56.961 [2024-11-20 16:24:32.638258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:56.961 [2024-11-20 16:24:32.649046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750)
00:29:56.961 [2024-11-20 16:24:32.649064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:56.961 [2024-11-20 16:24:32.649071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:56.961 [2024-11-20 16:24:32.657563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750)
00:29:56.961 [2024-11-20 16:24:32.657581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:56.961 [2024-11-20 16:24:32.657588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:56.961 [2024-11-20 16:24:32.669228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750)
00:29:56.961 [2024-11-20 16:24:32.669246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:56.961 [2024-11-20 16:24:32.669254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:56.961 [2024-11-20 16:24:32.681682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb61750)
00:29:56.961 [2024-11-20 16:24:32.681707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:56.961 [2024-11-20 16:24:32.681713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:56.961 3500.50 IOPS, 437.56 MiB/s
00:29:56.961 Latency(us)
00:29:56.961 [2024-11-20T15:24:32.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:56.961 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:56.961 nvme0n1 : 2.01 3503.63 437.95 0.00 0.00 4562.57 826.03 13161.81
00:29:56.961 [2024-11-20T15:24:32.897Z] ===================================================================================================================
00:29:56.961 [2024-11-20T15:24:32.897Z] Total : 3503.63 437.95 0.00 0.00 4562.57 826.03 13161.81
00:29:56.961 {
00:29:56.961 "results": [
00:29:56.961 {
00:29:56.961 "job": "nvme0n1",
00:29:56.961 "core_mask": "0x2",
00:29:56.961 "workload": "randread",
00:29:56.961 "status": "finished",
00:29:56.961 "queue_depth": 16,
00:29:56.961 "io_size": 131072,
00:29:56.961 "runtime": 2.006489,
00:29:56.961 "iops": 3503.6324644690303,
00:29:56.961 "mibps": 437.9540580586288,
00:29:56.961 "io_failed": 0,
00:29:56.961 "io_timeout": 0,
00:29:56.961 "avg_latency_us": 4562.567950687529,
00:29:56.961 "min_latency_us": 826.0266666666666,
00:29:56.961 "max_latency_us": 13161.813333333334
00:29:56.961 }
00:29:56.961 ],
00:29:56.961 "core_count": 1
00:29:56.961 }
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:56.961 | .driver_specific
00:29:56.961 | .nvme_error
00:29:56.961 | .status_code
00:29:56.961 | .command_transient_transport_error'
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 227 > 0 ))
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1458819
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1458819 ']'
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1458819
00:29:56.961 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:57.222 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:57.222 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1458819
00:29:57.222 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:57.222 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:57.222 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1458819'
00:29:57.222 killing process with pid 1458819
00:29:57.222 16:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1458819
00:29:57.222 Received shutdown signal, test time was about 2.000000 seconds
00:29:57.222
00:29:57.222 Latency(us)
00:29:57.222 [2024-11-20T15:24:33.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:57.222 [2024-11-20T15:24:33.158Z] ===================================================================================================================
00:29:57.222 [2024-11-20T15:24:33.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
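The transient error count asserted above ((( 227 > 0 ))) is not scraped from the console output; get_transient_errcount reads it back over the bdevperf RPC socket. A minimal stand-alone sketch of the same query, assuming an SPDK checkout at ./spdk and a bdevperf instance still listening on /var/tmp/bperf.sock (both paths are illustrative):

  # Ask the bdevperf app for per-bdev I/O statistics over its RPC socket,
  # then pull out the NVMe error counter that tracks (00/22) completions.
  # bdev_nvme_set_options --nvme-error-stat must have been set earlier for
  # these counters to be populated.
  ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'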
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1459499
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1459499 /var/tmp/bperf.sock
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1459499 ']'
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:57.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:57.222 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
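The trace above is the whole bperf fixture: bdevperf is launched suspended on a private RPC socket and only starts issuing I/O once it is poked over that socket. Reproduced by hand from the same spdk checkout, the two halves of that lifecycle would look roughly like this sketch:

    # Launch bdevperf pinned to core 1 (core mask -m 2): 4 KiB random writes,
    # queue depth 128, 2 s runtime; -z keeps it idle until configured via RPC.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &

    # Later, once the socket is up and bdevs are attached, start the run:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests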
00:29:57.222 [2024-11-20 16:24:33.108258] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization...
00:29:57.222 [2024-11-20 16:24:33.108315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459499 ]
00:29:57.483 [2024-11-20 16:24:33.189926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:57.483 [2024-11-20 16:24:33.219405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:58.055 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:58.055 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:58.055 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:58.055 16:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:58.317 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:58.317 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.317 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:58.317 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.317 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:58.317 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:58.578 nvme0n1
00:29:58.578 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:58.578 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.578 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:58.578 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.578 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:58.578 16:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:58.839 Running I/O for 2 seconds...
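Condensed, the setup just traced arms the failure the next two seconds of log show: the controller attaches with data digests enabled while crc32c injection is off, so the connect itself succeeds, and only then are 256 crc32c operations corrupted so that in-flight WRITEs fail their digest check on the target. A sketch of the equivalent RPC sequence, assuming the nvmf target answers on the default /var/tmp/spdk.sock (where the un-prefixed rpc_cmd calls above appear to go) and bdevperf on /var/tmp/bperf.sock:

    rpc=scripts/rpc.py
    # bdevperf side: keep per-status-code NVMe error counters and retry
    # transient errors indefinitely instead of failing the bdev.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side (default socket): crc32c must be healthy while connecting...
    $rpc accel_error_inject_error -o crc32c -t disable
    # ...attach with data digest (--ddgst) so every payload carries a CRC32C...
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then corrupt the next 256 crc32c operations on the target side.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256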
00:29:58.839 [2024-11-20 16:24:34.523350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:29:58.839 [2024-11-20 16:24:34.523586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:58.839 [2024-11-20 16:24:34.523615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:58.839 [2024-11-20 16:24:34.532282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:29:58.839 [2024-11-20 16:24:34.532461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:58.839 [2024-11-20 16:24:34.532478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:58.839 [2024-11-20 16:24:34.541046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:29:58.839 [2024-11-20 16:24:34.541368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:58.839 [2024-11-20 16:24:34.541385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[... the same three-line pattern repeats for roughly a hundred more injected digest errors between 16:24:34.549 and 16:24:35.508: each is a WRITE on qid:1, cid cycling 101-105, random lba, len:1, completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0065 ...]
00:29:59.630 29241.00 IOPS, 114.22 MiB/s
00:29:59.630 [2024-11-20T15:24:35.566Z]
00:29:59.630 [2024-11-20 16:24:35.517340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:29:59.630 [2024-11-20 16:24:35.517742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:59.630 [2024-11-20 16:24:35.517758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:59.630 [2024-11-20 16:24:35.526185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:29:59.630 [2024-11-20 16:24:35.526362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:59.630 [2024-11-20 16:24:35.526377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:59.630 [2024-11-20 16:24:35.534947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:29:59.630 [2024-11-20 16:24:35.535229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:59.630 [2024-11-20 16:24:35.535245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:59.630 [2024-11-20 16:24:35.543646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:29:59.630 [2024-11-20 16:24:35.543884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:103 nsid:1 lba:21270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.630 [2024-11-20 16:24:35.543898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.630 [2024-11-20 16:24:35.552385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.630 [2024-11-20 16:24:35.552532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.630 [2024-11-20 16:24:35.552547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.630 [2024-11-20 16:24:35.561132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.630 [2024-11-20 16:24:35.561502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.630 [2024-11-20 16:24:35.561517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.892 [2024-11-20 16:24:35.569871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.892 [2024-11-20 16:24:35.570123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.892 [2024-11-20 16:24:35.570143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.892 [2024-11-20 16:24:35.578567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.892 [2024-11-20 16:24:35.578687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.892 [2024-11-20 16:24:35.578705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.892 [2024-11-20 16:24:35.587276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.892 [2024-11-20 16:24:35.587471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.892 [2024-11-20 16:24:35.587486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.892 [2024-11-20 16:24:35.595995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.892 [2024-11-20 16:24:35.596229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.892 [2024-11-20 16:24:35.596244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.892 [2024-11-20 16:24:35.604732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.892 [2024-11-20 16:24:35.605040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.605056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.613483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.613722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.613737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.622263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.622471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.622486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.630959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.631101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.631116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.639662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.639883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.639898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.648399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.648630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.648645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.657091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.657353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.657368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.665825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 
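Every record in this run follows the same pattern: tcp.c's data_crc32_calc_done path finds that the CRC32C computed over a data PDU's payload does not match the digest carried in the PDU, and the affected WRITE is completed back to the host as a transport-level error. For reference, a minimal sketch of the CRC32C (Castagnoli) checksum that the NVMe/TCP data digest uses, written bitwise in Python with the standard reflected polynomial 0x82F63B78; production code (SPDK included) uses table-driven or hardware-accelerated variants, so this is illustrative only:

    def crc32c(data: bytes) -> int:
        # Bitwise CRC32C (Castagnoli): init 0xFFFFFFFF, reflected
        # polynomial 0x82F63B78, final XOR with 0xFFFFFFFF.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 1:
                    crc = (crc >> 1) ^ 0x82F63B78
                else:
                    crc >>= 1
        return crc ^ 0xFFFFFFFF

    # Standard check value for the CRC-32C family:
    assert crc32c(b"123456789") == 0xE3069283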
[2024-11-20 16:24:35.665949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.665964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.674626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.674824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.674839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.683336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.683554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.683569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.692047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.692329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.692345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.700779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.701097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.701112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.709490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.709614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.709629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.718214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.718367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.718382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.726965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with 
pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.727109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.727123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.735684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.735810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.735825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.744416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.744803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.744819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.753128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.753487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.753503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.761845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.762156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.762175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.770544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.770763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.770779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.779238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.779401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.779416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.787942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.788166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.788181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.796651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.796847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.796863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.805350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.805637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.805655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.814042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.814191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.814206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.893 [2024-11-20 16:24:35.822762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:29:59.893 [2024-11-20 16:24:35.822976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.893 [2024-11-20 16:24:35.822991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.831485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.831652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.831666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.840172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.840322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.848904] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.849193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.849209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.857604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.857836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.857851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.866305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.866449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.866463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.874996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.875139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.875154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.883709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.884026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.884042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.892443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.892754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.892770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.901156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.901454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.901470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
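Each completion above is printed with its status as (SCT/SC), here (00/22): status code type 0x0 (generic command status) and status code 0x22, which the NVMe base specification defines as Command Transient Transport Error; dnr:0 means Do Not Retry is clear, so the host is allowed to retry. A sketch of how those fields sit in dword 3 of a completion queue entry, with the bit layout taken from the spec (the helper name is illustrative):

    def decode_cqe_dw3(dw3: int) -> dict:
        # Field layout of NVMe completion-queue-entry dword 3.
        return {
            "cid": dw3 & 0xFFFF,        # command identifier
            "p":   (dw3 >> 16) & 0x1,   # phase tag
            "sc":  (dw3 >> 17) & 0xFF,  # status code (0x22 = command transient transport error)
            "sct": (dw3 >> 25) & 0x7,   # status code type (0x0 = generic command status)
            "m":   (dw3 >> 30) & 0x1,   # more status info available in a log page
            "dnr": (dw3 >> 31) & 0x1,   # do-not-retry
        }

    # The completions above print sct/sc as (00/22), cid cycling 101..105, dnr:0:
    s = decode_cqe_dw3((0x22 << 17) | 104)
    assert (s["sct"], s["sc"], s["cid"], s["dnr"]) == (0x0, 0x22, 104, 0)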
00:30:00.155 [2024-11-20 16:24:35.909868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.910170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.910186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.918564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.918712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.918727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.927285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.927603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.927618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.936071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.936213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.936229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.944779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.944901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.944916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.953501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.953772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.953787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.155 [2024-11-20 16:24:35.962190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.155 [2024-11-20 16:24:35.962406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.155 [2024-11-20 16:24:35.962421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:35.970899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:35.971189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:35.971204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:35.979630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:35.979804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:35.979819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:35.988334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:35.988501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:35.988516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:35.997019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:35.997289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:35.997304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.005725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.005845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.005860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.014410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.014557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.014572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.023150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.023534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.023549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.031850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.032116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.032134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.040577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.040788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.040803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.049283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.049416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.049431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.057980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.058098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.058112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.066689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.066815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.066830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.075395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.075642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.075659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.156 [2024-11-20 16:24:36.084110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.156 [2024-11-20 16:24:36.084360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.156 [2024-11-20 16:24:36.084377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.092827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.093049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.093064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.101540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.101765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.101780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.110250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.110482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.110497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.118973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.119241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.119257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.127699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.127847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.127863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.136420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.136746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.136762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.145117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.145388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 
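Interleaved with the error records are per-second progress markers from the workload generator (for example the "29241.00 IOPS, 114.22 MiB/s" line earlier in this run). The MiB/s figure is simply IOPS multiplied by the 4096-byte I/O size; a one-line check of that arithmetic against both the interim marker and the final result reported below:

    def mibps(iops: float, io_size_bytes: int = 4096) -> float:
        # MiB/s = IOPS * bytes per I/O / 2**20
        return iops * io_size_bytes / (1 << 20)

    assert round(mibps(29241.00), 2) == 114.22  # interim progress marker
    assert round(mibps(29285.95), 2) == 114.40  # final summary below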
[2024-11-20 16:24:36.145404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.153824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.154120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.154136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.162533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.162679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.162694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.171254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.171477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.171491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.179964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.180089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.180104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.188673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.188975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.188991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.197367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.197638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.197654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.206092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.206232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13179 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.206247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.214837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.215055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.215070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.223566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.223886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.223901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.232267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.232491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.232506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.240947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.241077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.241092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.249632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.249911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.249926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.258354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.258672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.258690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.267056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.267182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:21259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.267197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.275725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.275984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.275998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.284436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.284573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.284588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.293113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.293417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.293433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.301814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.302128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.302144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.310512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.310703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.310718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.319190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.319473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.319489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.327889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.328026] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.328042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.336592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.336794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.336809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.418 [2024-11-20 16:24:36.345305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.418 [2024-11-20 16:24:36.345582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.418 [2024-11-20 16:24:36.345598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.354051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.354287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.354303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.362778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.363013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.363028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.371460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.371631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.371646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.380195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.380383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.380398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.388881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.389038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.389053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.397633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.397803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.397817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.406332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.406577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.406592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.415032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.686 [2024-11-20 16:24:36.415203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.686 [2024-11-20 16:24:36.415218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.686 [2024-11-20 16:24:36.423703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 [2024-11-20 16:24:36.423915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.423930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.687 [2024-11-20 16:24:36.432412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 [2024-11-20 16:24:36.432581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.432595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.687 [2024-11-20 16:24:36.441104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 [2024-11-20 16:24:36.441277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.441292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.687 [2024-11-20 16:24:36.449782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 
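When a capture like this grows to hundreds of near-identical record triples, a small filter makes it reviewable. A hedged sketch (the helper below is hypothetical, not part of the test suite) that tallies digest-error records per TCP qpair pointer from saved console output:

    import re
    from collections import Counter

    digest_err = re.compile(
        r"data_crc32_calc_done: \*ERROR\*: Data digest error on tqpair=\((0x[0-9a-f]+)\)")

    def tally_digest_errors(log_text: str) -> Counter:
        # Count 'Data digest error' records, keyed by qpair pointer.
        return Counter(m.group(1) for m in digest_err.finditer(log_text))

    line = ("[2024-11-20 16:24:36.441104] tcp.c:2233:data_crc32_calc_done: "
            "*ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8")
    assert tally_digest_errors(line) == Counter({"0x1ba13d0": 1})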
[2024-11-20 16:24:36.450098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.450113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.687 [2024-11-20 16:24:36.458492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 [2024-11-20 16:24:36.458735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.458750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.687 [2024-11-20 16:24:36.467208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 [2024-11-20 16:24:36.467425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.467440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.687 [2024-11-20 16:24:36.475903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 [2024-11-20 16:24:36.476174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.476190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.687 [2024-11-20 16:24:36.484578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.687 [2024-11-20 16:24:36.484847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.687 [2024-11-20 16:24:36.484866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.688 [2024-11-20 16:24:36.493315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.688 [2024-11-20 16:24:36.493496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.688 [2024-11-20 16:24:36.493511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.688 [2024-11-20 16:24:36.502034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8 00:30:00.688 [2024-11-20 16:24:36.502263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.688 [2024-11-20 16:24:36.502278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:00.688 [2024-11-20 16:24:36.510761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) 
with pdu=0x2000166fe2e8
00:30:00.688 [2024-11-20 16:24:36.511028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:00.688 [2024-11-20 16:24:36.511044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:30:00.688 29287.00 IOPS, 114.40 MiB/s [2024-11-20T15:24:36.624Z] [2024-11-20 16:24:36.519454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba13d0) with pdu=0x2000166fe2e8
00:30:00.688 [2024-11-20 16:24:36.519712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:00.688 [2024-11-20 16:24:36.519726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:30:00.688
00:30:00.688 Latency(us)
00:30:00.688 [2024-11-20T15:24:36.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:00.688 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:00.688 nvme0n1 : 2.00 29285.95 114.40 0.00 0.00 4363.77 2621.44 9011.20
00:30:00.688 [2024-11-20T15:24:36.624Z] ===================================================================================================================
00:30:00.688 [2024-11-20T15:24:36.624Z] Total : 29285.95 114.40 0.00 0.00 4363.77 2621.44 9011.20
00:30:00.688 {
00:30:00.688 "results": [
00:30:00.688 {
00:30:00.688 "job": "nvme0n1",
00:30:00.688 "core_mask": "0x2",
00:30:00.688 "workload": "randwrite",
00:30:00.688 "status": "finished",
00:30:00.688 "queue_depth": 128,
00:30:00.688 "io_size": 4096,
00:30:00.688 "runtime": 2.004169,
00:30:00.688 "iops": 29285.95343007501,
00:30:00.688 "mibps": 114.3982555862305,
00:30:00.688 "io_failed": 0,
00:30:00.688 "io_timeout": 0,
00:30:00.688 "avg_latency_us": 4363.772609579627,
00:30:00.688 "min_latency_us": 2621.44,
00:30:00.688 "max_latency_us": 9011.2
00:30:00.688 }
00:30:00.688 ],
00:30:00.689 "core_count": 1
00:30:00.689 }
00:30:00.689 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:00.689 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:00.689 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:00.689 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:00.689 | .driver_specific
00:30:00.689 | .nvme_error
00:30:00.689 | .status_code
00:30:00.689 | .command_transient_transport_error'
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 230 > 0 ))
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1459499
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1459499 ']'
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1459499
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1459499
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1459499'
killing process with pid 1459499
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1459499
Received shutdown signal, test time was about 2.000000 seconds
00:30:00.999
00:30:00.999 Latency(us)
00:30:00.999 [2024-11-20T15:24:36.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:00.999 [2024-11-20T15:24:36.935Z] ===================================================================================================================
00:30:00.999 [2024-11-20T15:24:36.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:00.999 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1459499
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1460186
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1460186 /var/tmp/bperf.sock
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1460186 ']'
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:01.000 16:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:01.316 [2024-11-20 16:24:36.969029] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization...
00:30:01.316 [2024-11-20 16:24:36.969086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460186 ]
00:30:01.316 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:01.316 Zero copy mechanism will not be used.
00:30:01.316 [2024-11-20 16:24:37.053712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:01.316 [2024-11-20 16:24:37.082844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:01.890 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:01.890 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:01.890 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:01.890 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:02.151 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:02.151 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.151 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:02.151 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.151 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:02.151 16:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:02.412 nvme0n1
00:30:02.675 16:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:02.675 16:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.675 16:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:02.675 16:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.675 16:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:02.675 16:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:02.675 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:02.675 Zero copy mechanism will not be used.
00:30:02.675 Running I/O for 2 seconds...
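The xtrace above reduces to the following sequence (a minimal sketch reconstructed from the traced commands, not part of the log itself; the SPDK tree path, bperf socket, target address, and NQN are the ones shown in the trace, while routing the un-socketed rpc_cmd calls to the target's default RPC socket is an assumption):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Host side: start bdevperf with its own RPC socket (same flags as host/digest.sh@57 above)
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-command NVMe error statistics and retry failed commands indefinitely (host/digest.sh@61)
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side (assumed default RPC socket): make sure no error injection is active yet (host/digest.sh@63)
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the controller over TCP with data digest enabled, producing bdev nvme0n1 (host/digest.sh@64)
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt every 32nd crc32c the target computes, so host-side digest checks fail (host/digest.sh@67)
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive I/O for the configured 2 seconds (host/digest.sh@69) ...
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# ... then read back how many completions ended as transient transport errors,
# the same query get_transient_errcount issues above (host/digest.sh@71/@27/@28)
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

With --bdev-retry-count -1 the corrupted digests never surface as I/O failures (io_failed stays 0 in the results JSON above); they only accumulate in the transient-transport-error counter, which the test asserts to be non-zero, e.g. the (( 230 > 0 )) check earlier.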
00:30:02.675 [2024-11-20 16:24:38.466380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.675 [2024-11-20 16:24:38.466468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.675 [2024-11-20 16:24:38.466494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.675 [2024-11-20 16:24:38.472407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.675 [2024-11-20 16:24:38.472519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.675 [2024-11-20 16:24:38.472537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.675 [2024-11-20 16:24:38.478025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.675 [2024-11-20 16:24:38.478203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.675 [2024-11-20 16:24:38.478220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.483790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.484006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.484022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.489480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.489604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.489620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.495008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.495140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.495155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.500401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.500468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.500483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.505642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.505847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.505863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.512757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.513028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.513044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.517914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.518176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.518194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.522964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.523176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.523191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.529097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.529348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.529367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.536905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.537150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.537173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.545371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.545562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.545578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.551723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.551924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.551940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.557209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.557409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.557425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.563079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.563285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.563302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.571648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.571844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.571860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.576770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.576975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.576992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.581754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.581954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.581970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.587479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.587650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.587666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.592065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.592230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.592246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.596407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.596563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.596579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.600314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.600470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.600486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.676 [2024-11-20 16:24:38.604143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.676 [2024-11-20 16:24:38.604308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.676 [2024-11-20 16:24:38.604325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.607938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.608094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.608110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.611892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.612047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.612063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.615775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.615931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.615947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.619814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.619969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.619985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.624032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.624330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.624347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.627905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.628061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.628077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.632397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.632552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.632568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.636395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.636571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.636587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.644082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.644312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.644328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.653350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.653418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 
16:24:38.653433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.659949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.660002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.660017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.665095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.665223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.665239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.671019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.671114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.671132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.677394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.677520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.677536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.683202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.683350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.683365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.689311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.689431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.689446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.694579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.694896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:02.940 [2024-11-20 16:24:38.694912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.701350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.701462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.701477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.705792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.705895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.705911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.709987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.710075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.710090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.714268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.714404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.714419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.718063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.718152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.718172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.940 [2024-11-20 16:24:38.722093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.940 [2024-11-20 16:24:38.722163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.940 [2024-11-20 16:24:38.722179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.726932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.727118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.727133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.731751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.731801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.731816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.735140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.735201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.735216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.738622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.738683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.738698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.742022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.742098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.742113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.745425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.745482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.745497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.748594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.748638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.748653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.751591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.751633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.751648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.754605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.754686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.754701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.757482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.757527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.757542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.760240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.760290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.760305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.762766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.762833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.762848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.765653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.765727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.765742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.769523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.769791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.769808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.777927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.778005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.778021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.785819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.786064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.786082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.791177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.791244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.791260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.794634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.794698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.794714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.798137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.798201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.798216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.801515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.801566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.801581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.804975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.805018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.805032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.808114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.808166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.808182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.812013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.812256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.812271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.815888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.815945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.815960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.818848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.818904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.818919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.821821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.821870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.821885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.941 [2024-11-20 16:24:38.825100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.941 [2024-11-20 16:24:38.825147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.941 [2024-11-20 16:24:38.825167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.827974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.828023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.828038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.830751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 
16:24:38.830791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.830806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.833674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.833720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.833735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.837726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.837788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.837804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.841745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.841797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.841813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.844568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.844636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.844651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.847339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.847381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.847397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.849972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.850026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.850042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.852564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 
00:30:02.942 [2024-11-20 16:24:38.852617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.852632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.855246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.855298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.855313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.858075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.858139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.858154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.860833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.860934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.860949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.864269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.864371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.864386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:02.942 [2024-11-20 16:24:38.871621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:02.942 [2024-11-20 16:24:38.871859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.942 [2024-11-20 16:24:38.871874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.882558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.882813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.882833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.891605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.891842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.891858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.901650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.901801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.901816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.910031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.910285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.910300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.918901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.919021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.919036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.927987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.928325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.928341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.937259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.937510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.937526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.945659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.945744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.945759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.950726] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.950828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.950843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.955833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.956092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.956108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.961827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.962020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.962035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.966654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.966780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.966795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.971476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.971566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.971581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.976529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.976813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.204 [2024-11-20 16:24:38.976829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.204 [2024-11-20 16:24:38.982319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.204 [2024-11-20 16:24:38.982537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:38.982552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:38.987352] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:38.987495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:38.987511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:38.991048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:38.991149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:38.991170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:38.995043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:38.995157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:38.995177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:38.999121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:38.999239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:38.999255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.003272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.003340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.003356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.007219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.007357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.007372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.015471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.015785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.015801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.205 
[2024-11-20 16:24:39.023479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.023599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.023614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.030091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.030323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.030338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.037406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.037669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.037686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.042075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.042146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.042167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.049247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.049471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.049489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.057446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.057585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.057601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.062453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.062583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.062598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.066859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.066939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.066954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.074066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.074396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.074412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.081310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.081458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.081474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.086329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.086468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.086483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.090554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.090693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.090708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.096252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.096432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.096446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.102773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.102898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.102913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.107534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.107675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.107690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.111664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.111797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.111813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.115974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.116138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.116153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.121843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.122167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.122183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.126369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.126457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.205 [2024-11-20 16:24:39.126472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.205 [2024-11-20 16:24:39.130296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.205 [2024-11-20 16:24:39.130400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.206 [2024-11-20 16:24:39.130415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.206 [2024-11-20 16:24:39.134069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.206 [2024-11-20 16:24:39.134122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.206 [2024-11-20 16:24:39.134137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.137556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.137618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.137633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.140918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.140968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.140983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.144198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.144274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.144289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.147268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.147315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.147331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.150394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.150437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.150451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.153574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.153638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.153653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.156621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.156663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.156678] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.472 [2024-11-20 16:24:39.159731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.472 [2024-11-20 16:24:39.159775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.472 [2024-11-20 16:24:39.159790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.162381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.162422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.162437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.164957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.165006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.165025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.167430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.167476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.167491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.170077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.170129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.170144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.172818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.172879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.172894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.175621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.175697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.175712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.178271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.178332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.178347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.180927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.180980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.180996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.183425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.473 [2024-11-20 16:24:39.183466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.473 [2024-11-20 16:24:39.183482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.473 [2024-11-20 16:24:39.185863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.185925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.185940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.188975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.189075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.189090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.192494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.192566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.192582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.198768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.198844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 
16:24:39.198859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.202402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.202515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.202530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.205496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.205572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.205587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.208675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.208760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.208775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.211997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.212048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.212063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.474 [2024-11-20 16:24:39.215008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.474 [2024-11-20 16:24:39.215105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.474 [2024-11-20 16:24:39.215119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.219365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.219434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.219449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.224531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.224643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:03.475 [2024-11-20 16:24:39.224658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.231581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.231718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.231734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.239060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.239177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.239192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.244478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.244575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.244590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.249201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.249313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.249329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.253666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.253740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.253755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.258536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.258633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.258648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.262854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.263040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.270286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.270517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.270535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.275664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.275752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.275768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.279433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.279566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.279581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.283618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.283690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.283705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.287715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.287854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.287869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.292638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.292817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.292832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.298269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.298512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.298529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.306692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.306774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.306789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.310876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.310947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.310962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.315017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.315116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.315131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.319091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.319193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.319208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.322983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.323117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.323133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.326223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.326341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.326356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.329692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.329769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.329784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.334549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.334650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.334665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.340374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.340479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.340494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.345654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.345906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.345922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.349610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.349684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.349700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.352514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.352589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.352604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.355391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.355463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.355478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.358416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.358498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.358513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.361422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.361498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.475 [2024-11-20 16:24:39.361513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.475 [2024-11-20 16:24:39.364428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.475 [2024-11-20 16:24:39.364494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.364510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.367396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.367469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.367484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.370414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.370491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.370506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.372931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.372997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.373013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.375383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.375448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.375467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.377848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 
16:24:39.377912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.377927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.380318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.380381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.380397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.382740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.382804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.382819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.385549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.385629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.385644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.391510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.391698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.391714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.397847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.476 [2024-11-20 16:24:39.397950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.476 [2024-11-20 16:24:39.397965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.476 [2024-11-20 16:24:39.400955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.401084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.401100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.404428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 
00:30:03.742 [2024-11-20 16:24:39.404502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.404517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.408601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.408738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.408753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.416806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.416974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.416990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.422916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.423059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.423074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.427876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.427974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.427989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.432727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.432838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.432853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.437183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.437265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.437280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.441522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.441656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.441672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.447390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.447549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.447564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.452551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.452681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.452695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.742 [2024-11-20 16:24:39.456284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.742 [2024-11-20 16:24:39.456381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.742 [2024-11-20 16:24:39.456396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.742 6562.00 IOPS, 820.25 MiB/s [2024-11-20T15:24:39.679Z] [2024-11-20 16:24:39.460952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.461100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.461115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.464759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.464860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.464875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.468327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.468427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.468442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.743 
[2024-11-20 16:24:39.471935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.472035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.472050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.474824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.474923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.474939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.477543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.477640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.477656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.480295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.480392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.480407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.483038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.483141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.483167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.485783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.485888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.485904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.488387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.488481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.488496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.490826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.490921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.490936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.493265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.493359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.493374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.495669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.495763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.495778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.498113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.498212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.498228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.500544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.500638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.500653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.502960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.503055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.503070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.505382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.505482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.505497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.507792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.507901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.510214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.510307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.510323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.512623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.512715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.512730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.515011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.515104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.515120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.517422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.517574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.517589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.520141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.520264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.520279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.524068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.524268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.524283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.529114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.529337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.529353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.533282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.533373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.533389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.541907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.743 [2024-11-20 16:24:39.542090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.743 [2024-11-20 16:24:39.542105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.743 [2024-11-20 16:24:39.545831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.545922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.545937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.548559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.548616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.548631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.551017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.551063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.551078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.553517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.553561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.553576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.555971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.556014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.556029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.558451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.558496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.558511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.560930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.560985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.561003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.563396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.563462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.563477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.565866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.565910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.565925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.568278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.568328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.568343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.570831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.570876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 
16:24:39.570891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.573402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.573448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.573463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.575847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.575887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.575902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.578445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.578485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.578501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.581380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.581513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.581528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.584379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.584423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.584438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.586967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.587008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.587024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.589500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.589540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:03.744 [2024-11-20 16:24:39.589556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.592075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.592124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.592139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.594572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.594623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.594638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.597206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.597248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.597263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.600186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.600227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.600242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.603207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.603249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.603264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.605859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.605902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.605916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.608427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.608478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.608493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.611024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.611078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.611094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.613740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.744 [2024-11-20 16:24:39.613784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.744 [2024-11-20 16:24:39.613799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.744 [2024-11-20 16:24:39.616410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.616461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.616475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.619135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.619192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.619207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.621915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.621973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.621988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.624664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.624716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.624731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.627394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.627473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.627488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.630176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.630248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.630266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.632842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.632890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.632905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.635306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.635359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.635374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.637739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.637794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.637809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.640267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.640325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.640339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.642726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.642781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.642796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.645129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.645185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.645200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.647763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.647843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.647859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.651362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.651449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.651465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.657768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.657939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.657955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.661710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.661808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.661823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.664801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.664878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.664893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.667723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.667791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.667806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.745 [2024-11-20 16:24:39.670730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:03.745 [2024-11-20 16:24:39.670815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.745 [2024-11-20 16:24:39.670830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.008 [2024-11-20 16:24:39.676331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.008 [2024-11-20 16:24:39.676602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.008 [2024-11-20 16:24:39.676618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.008 [2024-11-20 16:24:39.681392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.008 [2024-11-20 16:24:39.681474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.008 [2024-11-20 16:24:39.681489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.008 [2024-11-20 16:24:39.684510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.008 [2024-11-20 16:24:39.684593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.008 [2024-11-20 16:24:39.684608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.008 [2024-11-20 16:24:39.687607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.008 [2024-11-20 16:24:39.687716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.008 [2024-11-20 16:24:39.687731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.690884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.690968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.690983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.694171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.694220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.694235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.697445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 
16:24:39.697500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.697515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.701981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.702062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.702077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.708914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.709040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.712453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.712520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.712535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.716255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.716329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.716344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.719846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.719925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.719940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.724668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.724758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.724776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.730271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 
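(Context for the repeated failures above: NVMe/TCP protects each PDU's data section with a DDGST field, a CRC32C over the data. When the receiver's recomputed CRC32C does not match the DDGST carried in the PDU, SPDK's TCP transport reports the "Data digest error" seen in these records, and — as the paired nvme_qpair notices show — the affected command is completed back to the host with TRANSIENT TRANSPORT ERROR (00/22). Below is a minimal, self-contained sketch of that check, in plain C for illustration only; it is not SPDK's actual code, and crc32c() / data_digest_ok() are hypothetical names.)

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 --
     * the digest flavor NVMe/TCP uses for HDGST/DDGST. Real
     * implementations use tables or hardware instructions; this loop
     * form is only for clarity. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical check mirroring what a "data digest error" means:
     * the DDGST carried in the PDU differs from the recomputed CRC. */
    static bool data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
    {
        return crc32c(data, len) == ddgst;
    }

    int main(void)
    {
        const uint8_t payload[] = "123456789";
        /* 0xE3069283 is the standard CRC32C check value for "123456789". */
        printf("digest ok: %d\n", data_digest_ok(payload, 9, 0xE3069283u));
        return 0;
    }

(Any C compiler builds this; it prints "digest ok: 1". In the log, every mismatch instead surfaces as one error/command/completion triple like those that continue below.)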
00:30:04.009 [2024-11-20 16:24:39.730336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.730351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.733909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.734012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.734027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.737387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.737452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.737467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.740928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.741022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.741037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.744454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.744515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.744530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.747994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.748079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.748094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.751501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.751579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.751594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.755028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.755147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.755168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.758617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.758704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.758719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.761891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.761950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.761965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.765461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.765573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.765588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.768699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.768790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.768805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.772615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.772706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.772720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.779400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.009 [2024-11-20 16:24:39.779644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.009 [2024-11-20 16:24:39.779659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.009 [2024-11-20 16:24:39.783902] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.009 [2024-11-20 16:24:39.783992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.009 [2024-11-20 16:24:39.784007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:04.009 [2024-11-20 16:24:39.787665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.009 [2024-11-20 16:24:39.787735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.009 [2024-11-20 16:24:39.787750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:04.009 [2024-11-20 16:24:39.791440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.009 [2024-11-20 16:24:39.791548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.009 [2024-11-20 16:24:39.791563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... ~140 further repetitions of this three-line sequence omitted: tcp.c:2233:data_crc32_calc_done data digest *ERROR* on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8, then the nvme_qpair.c: 243 WRITE command *NOTICE*, then the nvme_qpair.c: 474 TRANSIENT TRANSPORT ERROR (00/22) completion *NOTICE*; all on qid:1 (cid:0 through 16:24:39.999, cid:1 thereafter), len:32, lba varying, sqhd cycling 0002/0022/0042/0062, timestamps 16:24:39.794 through 16:24:40.305 ...]
00:30:04.540 [2024-11-20 16:24:40.308967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.540 [2024-11-20 16:24:40.309057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.540 [2024-11-20 16:24:40.309072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:04.540 [2024-11-20 16:24:40.312656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.540 [2024-11-20 16:24:40.312739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.540 [2024-11-20 16:24:40.312755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.315971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.316115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.316130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.322001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.322194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.322209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.326599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.326688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.326703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.331026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.331098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.335508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.335608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.335624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.341398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.341636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.341652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.350078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.350241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.350256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.358164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.358466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.358482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.363066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.363131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.363146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.367196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.367256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.367271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.371199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.371302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.371316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.374665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.374712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.374727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.377782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.377858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.377873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.380931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.380975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 
16:24:40.380991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.383737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.383796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.383812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.386392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.386435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.386451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.389079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.389134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.389148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.391604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.391649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.391664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.394093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.394135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.394150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.396760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.396803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.396818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.399289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.399340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:04.540 [2024-11-20 16:24:40.399356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.401892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.401932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.401950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.404528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.404576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.404591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.406951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.407011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.407026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.540 [2024-11-20 16:24:40.409483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.540 [2024-11-20 16:24:40.409523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.540 [2024-11-20 16:24:40.409538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.412011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.412149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.412168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.414954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.415052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.415067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.419007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.419052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.419068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.421555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.421604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.421620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.424153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.424203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.424217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.426788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.426841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.426856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.429448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.429502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.429517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.432131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.432177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.432192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.434695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.434736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.434751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.437294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.437337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.437352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.439943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.439996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.440012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.442500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.442540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.442555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.445102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.445150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.445169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.447711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.447769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.447784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.450128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.450173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.450188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.452690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.452749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.541 [2024-11-20 16:24:40.452764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.541 [2024-11-20 16:24:40.455346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8 00:30:04.541 [2024-11-20 16:24:40.455405] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
00:30:04.541 [2024-11-20 16:24:40.455346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.541 [2024-11-20 16:24:40.455405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.541 [2024-11-20 16:24:40.455420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:04.541 [2024-11-20 16:24:40.458648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.541 [2024-11-20 16:24:40.458704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.541 [2024-11-20 16:24:40.458719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:04.541 7760.50 IOPS, 970.06 MiB/s [2024-11-20T15:24:40.477Z] [2024-11-20 16:24:40.462980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba1710) with pdu=0x2000166ff3c8
00:30:04.541 [2024-11-20 16:24:40.463026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.541 [2024-11-20 16:24:40.463041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:04.541
00:30:04.541 Latency(us)
00:30:04.541 [2024-11-20T15:24:40.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.541 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:04.541 nvme0n1 : 2.00 7760.26 970.03 0.00 0.00 2058.61 976.21 12615.68
00:30:04.541 [2024-11-20T15:24:40.477Z] ===================================================================================================================
00:30:04.541 [2024-11-20T15:24:40.477Z] Total : 7760.26 970.03 0.00 0.00 2058.61 976.21 12615.68
00:30:04.541 {
00:30:04.541   "results": [
00:30:04.541     {
00:30:04.541       "job": "nvme0n1",
00:30:04.541       "core_mask": "0x2",
00:30:04.541       "workload": "randwrite",
00:30:04.541       "status": "finished",
00:30:04.541       "queue_depth": 16,
00:30:04.541       "io_size": 131072,
00:30:04.541       "runtime": 2.002769,
00:30:04.541       "iops": 7760.255925670908,
00:30:04.541       "mibps": 970.0319907088635,
00:30:04.541       "io_failed": 0,
00:30:04.541       "io_timeout": 0,
00:30:04.541       "avg_latency_us": 2058.6076438038863,
00:30:04.541       "min_latency_us": 976.2133333333334,
00:30:04.541       "max_latency_us": 12615.68
00:30:04.541     }
00:30:04.541   ],
00:30:04.541   "core_count": 1
00:30:04.541 }
00:30:04.802 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:04.802 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:04.802 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:04.802 | .driver_specific
00:30:04.802 | .nvme_error
00:30:04.802 | .status_code
00:30:04.802 | .command_transient_transport_error'
00:30:04.802 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:04.802 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 502 > 0 ))
00:30:04.802 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1460186
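A note on the check traced above: after the bperf run ends, digest.sh asks the bdev layer how many I/Os completed with a transient transport error, which is how the injected CRC32C data-digest failures surface on the host side. A minimal standalone sketch of that query, assuming the rpc.py path and bperf socket from this run:

  #!/usr/bin/env bash
  # Sketch of the get_transient_errcount step traced above; not the canonical digest.sh.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # bdev_get_iostat reports per-bdev NVMe error counters keyed by status code.
  count=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The test asserts the injected digest errors were actually observed (502 in this run).
  (( count > 0 )) && echo "observed $count transient transport errors"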
00:30:04.802 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1460186 ']'
00:30:04.803 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1460186
00:30:04.803 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:04.803 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1460186
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1460186'
00:30:05.064 killing process with pid 1460186
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1460186
00:30:05.064 Received shutdown signal, test time was about 2.000000 seconds
00:30:05.064
00:30:05.064 Latency(us)
00:30:05.064 [2024-11-20T15:24:41.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.064 [2024-11-20T15:24:41.000Z] ===================================================================================================================
00:30:05.064 [2024-11-20T15:24:41.000Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1460186
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1457786
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1457786 ']'
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1457786
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457786
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457786'
00:30:05.064 killing process with pid 1457786
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1457786
00:30:05.064 16:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1457786
00:30:05.326
00:30:05.326 real 0m16.468s
00:30:05.326 user 0m32.570s
00:30:05.326 sys 0m3.675s
00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@10 -- # set +x 00:30:05.326 ************************************ 00:30:05.326 END TEST nvmf_digest_error 00:30:05.326 ************************************ 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.326 rmmod nvme_tcp 00:30:05.326 rmmod nvme_fabrics 00:30:05.326 rmmod nvme_keyring 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1457786 ']' 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1457786 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1457786 ']' 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1457786 00:30:05.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1457786) - No such process 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1457786 is not found' 00:30:05.326 Process with pid 1457786 is not found 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.326 16:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.871 00:30:07.871 real 0m42.750s 00:30:07.871 user 1m6.575s 00:30:07.871 sys 0m13.299s 00:30:07.871 16:24:43 
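The nvmftestfini sequence traced above tears the digest test back down: unload the NVMe/TCP kernel modules, strip only the SPDK-tagged firewall rules, and drop the target network namespace. A condensed sketch of those steps, reconstructed from the xtrace output (the namespace removal inside _remove_spdk_ns is an assumption, since its internals are not shown in this log):

  # Unload initiator-side kernel modules (nvme_fabrics and nvme_keyring go with nvme-tcp).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Remove only rules tagged SPDK_NVMF, leaving the rest of the ruleset intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Assumed: _remove_spdk_ns deletes the target namespace created by nvmftestinit.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1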
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:07.871 ************************************ 00:30:07.871 END TEST nvmf_digest 00:30:07.871 ************************************ 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.871 ************************************ 00:30:07.871 START TEST nvmf_bdevperf 00:30:07.871 ************************************ 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:07.871 * Looking for test storage... 00:30:07.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.871 --rc genhtml_branch_coverage=1 00:30:07.871 --rc genhtml_function_coverage=1 00:30:07.871 --rc genhtml_legend=1 00:30:07.871 --rc geninfo_all_blocks=1 00:30:07.871 --rc geninfo_unexecuted_blocks=1 00:30:07.871 00:30:07.871 ' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.871 --rc genhtml_branch_coverage=1 00:30:07.871 --rc genhtml_function_coverage=1 00:30:07.871 --rc genhtml_legend=1 00:30:07.871 --rc geninfo_all_blocks=1 00:30:07.871 --rc geninfo_unexecuted_blocks=1 00:30:07.871 00:30:07.871 ' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.871 --rc genhtml_branch_coverage=1 00:30:07.871 --rc genhtml_function_coverage=1 00:30:07.871 --rc genhtml_legend=1 00:30:07.871 --rc geninfo_all_blocks=1 00:30:07.871 --rc geninfo_unexecuted_blocks=1 00:30:07.871 00:30:07.871 ' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:07.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.871 --rc genhtml_branch_coverage=1 00:30:07.871 --rc genhtml_function_coverage=1 00:30:07.871 --rc genhtml_legend=1 00:30:07.871 --rc geninfo_all_blocks=1 00:30:07.871 --rc geninfo_unexecuted_blocks=1 00:30:07.871 00:30:07.871 ' 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
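The lcov preamble above gates the coverage flags on the tool version: scripts/common.sh cmp_versions splits each version on dots and dashes and compares field by field, and because 1.15 < 2 the run keeps the legacy --rc lcov_* option names. A minimal standalone sketch of that comparison (an assumption: simplified to plain numeric fields, without the extglob handling the real helper has):

  # version_lt V1 V2 -> exit 0 if V1 sorts before V2.
  version_lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                        # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x: use legacy --rc lcov_* option names"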
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.871 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:07.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
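The PATH printed above carries the same toolchain directories many times because paths/export.sh prepends them unconditionally each time it is sourced. A guard like the following (an illustration only, not something in the tree) would keep the variable idempotent:

  # Prepend a directory to PATH only if it is not already present.
  path_prepend() {
    case ":$PATH:" in
      *":$1:"*) ;;              # already there, do nothing
      *) PATH="$1:$PATH" ;;
    esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  export PATH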
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.872 16:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:16.017 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:16.017 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
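The scan in progress here matches the two e810 ports by PCI device ID 0x159b, then (in the loop that follows) resolves each port to its bound kernel net device through sysfs and keeps links that are up. A standalone sketch of that mapping, assuming the PCI addresses and the operstate check from this machine:

  # Map NVMe-oF capable NICs to their kernel net devices via sysfs.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e $path ]] || continue                        # no netdev if unbound from ice
      [[ $(cat "$path/operstate") == up ]] || continue  # keep only links that are up
      echo "Found net devices under $pci: ${path##*/}"
    done
  done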
00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:16.017 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:16.017 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:16.017 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:30:16.018 00:30:16.018 --- 10.0.0.2 ping statistics --- 00:30:16.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.018 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:30:16.018 00:30:16.018 --- 10.0.0.1 ping statistics --- 00:30:16.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.018 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1465209 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1465209 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1465209 ']' 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.018 16:24:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.018 [2024-11-20 16:24:50.811261] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
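The nvmf_tcp_init steps traced above build a loopback topology out of the two E810 ports: cvl_0_0 is moved into a fresh network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables ACCEPT rule opening TCP/4420 and a ping in each direction to verify the path. A condensed sketch of the same setup, reusing the interface and namespace names from this run (nvmf_tgt path shortened to be relative to the SPDK tree):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
    # Launch the target inside the namespace, as nvmfappstart does below:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &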
00:30:16.018 [2024-11-20 16:24:50.811312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.018 [2024-11-20 16:24:50.902155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:16.018 [2024-11-20 16:24:50.932198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.018 [2024-11-20 16:24:50.932226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.018 [2024-11-20 16:24:50.932233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.018 [2024-11-20 16:24:50.932238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.018 [2024-11-20 16:24:50.932243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.018 [2024-11-20 16:24:50.933440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.018 [2024-11-20 16:24:50.933589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.018 [2024-11-20 16:24:50.933591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.018 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.018 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:16.018 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.019 [2024-11-20 16:24:51.657069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.019 Malloc0 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.019 [2024-11-20 16:24:51.719397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.019 { 00:30:16.019 "params": { 00:30:16.019 "name": "Nvme$subsystem", 00:30:16.019 "trtype": "$TEST_TRANSPORT", 00:30:16.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.019 "adrfam": "ipv4", 00:30:16.019 "trsvcid": "$NVMF_PORT", 00:30:16.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.019 "hdgst": ${hdgst:-false}, 00:30:16.019 "ddgst": ${ddgst:-false} 00:30:16.019 }, 00:30:16.019 "method": "bdev_nvme_attach_controller" 00:30:16.019 } 00:30:16.019 EOF 00:30:16.019 )") 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:16.019 16:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.019 "params": { 00:30:16.019 "name": "Nvme1", 00:30:16.019 "trtype": "tcp", 00:30:16.019 "traddr": "10.0.0.2", 00:30:16.019 "adrfam": "ipv4", 00:30:16.019 "trsvcid": "4420", 00:30:16.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.019 "hdgst": false, 00:30:16.019 "ddgst": false 00:30:16.019 }, 00:30:16.019 "method": "bdev_nvme_attach_controller" 00:30:16.019 }' 00:30:16.019 [2024-11-20 16:24:51.774400] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
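The rpc_cmd calls traced above provision the whole target over /var/tmp/spdk.sock: a TCP transport (with the -o -u 8192 options shown), a 64 MiB malloc bdev with 512-byte blocks, and a subsystem exposing it on 10.0.0.2:4420. The same sequence issued directly with scripts/rpc.py would look like this (rpc.py can stay in the root namespace, since the RPC socket is a path-based UNIX socket):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420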
00:30:16.019 [2024-11-20 16:24:51.774447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465313 ] 00:30:16.019 [2024-11-20 16:24:51.861104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.019 [2024-11-20 16:24:51.897165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.281 Running I/O for 1 seconds... 00:30:17.261 11050.00 IOPS, 43.16 MiB/s 00:30:17.262 Latency(us) 00:30:17.262 [2024-11-20T15:24:53.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.262 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:17.262 Verification LBA range: start 0x0 length 0x4000 00:30:17.262 Nvme1n1 : 1.01 11071.66 43.25 0.00 0.00 11508.08 1706.67 14199.47 00:30:17.262 [2024-11-20T15:24:53.198Z] =================================================================================================================== 00:30:17.262 [2024-11-20T15:24:53.198Z] Total : 11071.66 43.25 0.00 0.00 11508.08 1706.67 14199.47 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1465585 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.262 { 00:30:17.262 "params": { 00:30:17.262 "name": "Nvme$subsystem", 00:30:17.262 "trtype": "$TEST_TRANSPORT", 00:30:17.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.262 "adrfam": "ipv4", 00:30:17.262 "trsvcid": "$NVMF_PORT", 00:30:17.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.262 "hdgst": ${hdgst:-false}, 00:30:17.262 "ddgst": ${ddgst:-false} 00:30:17.262 }, 00:30:17.262 "method": "bdev_nvme_attach_controller" 00:30:17.262 } 00:30:17.262 EOF 00:30:17.262 )") 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:17.262 16:24:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:17.262 "params": { 00:30:17.262 "name": "Nvme1", 00:30:17.262 "trtype": "tcp", 00:30:17.262 "traddr": "10.0.0.2", 00:30:17.262 "adrfam": "ipv4", 00:30:17.262 "trsvcid": "4420", 00:30:17.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.262 "hdgst": false, 00:30:17.262 "ddgst": false 00:30:17.262 }, 00:30:17.262 "method": "bdev_nvme_attach_controller" 00:30:17.262 }' 00:30:17.523 [2024-11-20 16:24:53.219200] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:30:17.523 [2024-11-20 16:24:53.219257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465585 ] 00:30:17.523 [2024-11-20 16:24:53.307789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.523 [2024-11-20 16:24:53.343209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.783 Running I/O for 15 seconds... 00:30:19.737 11155.00 IOPS, 43.57 MiB/s [2024-11-20T15:24:56.248Z] 11204.00 IOPS, 43.77 MiB/s [2024-11-20T15:24:56.248Z] 16:24:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1465209 00:30:20.312 16:24:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:20.312 [2024-11-20 16:24:56.181728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:20.312 [2024-11-20 16:24:56.181769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.312 [2024-11-20 16:24:56.181790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:20.312 [2024-11-20 16:24:56.181801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.312 [2024-11-20 16:24:56.181812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:20.312 [2024-11-20 16:24:56.181821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.312 [2024-11-20 16:24:56.181830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:20.312 [2024-11-20 16:24:56.181838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.312 [2024-11-20 16:24:56.181848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:20.312 [2024-11-20 16:24:56.181858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.312 [2024-11-20 16:24:56.181868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:20.312 [2024-11-20 
16:24:56.181877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.312
[... remaining nvme_qpair.c print_command/print_completion pairs trimmed: every in-flight WRITE (lba 95488-96320) and READ (lba 95312-95432) on qid:1 completes with ABORTED - SQ DELETION (00/08) after the target is killed; the trace then continues in nvme_tcp.c (truncated here) ...]
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ee150 is same with the state(6) to be set 00:30:20.315 [2024-11-20 16:24:56.184043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:20.315 [2024-11-20 16:24:56.184049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:20.315 [2024-11-20 16:24:56.184056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96328 len:8 PRP1 0x0 PRP2 0x0 00:30:20.315 [2024-11-20 16:24:56.184064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.315 [2024-11-20 16:24:56.187593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.315 [2024-11-20 16:24:56.187648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.315 [2024-11-20 16:24:56.188476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.315 [2024-11-20 16:24:56.188514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.315 [2024-11-20 16:24:56.188527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.315 [2024-11-20 16:24:56.188766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.315 [2024-11-20 16:24:56.188987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.315 [2024-11-20 16:24:56.188997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.315 [2024-11-20 16:24:56.189006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.315 [2024-11-20 16:24:56.189016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:20.315 [2024-11-20 16:24:56.201771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.315 [2024-11-20 16:24:56.202275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.315 [2024-11-20 16:24:56.202319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.315 [2024-11-20 16:24:56.202332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.315 [2024-11-20 16:24:56.202569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.315 [2024-11-20 16:24:56.202789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.315 [2024-11-20 16:24:56.202799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.315 [2024-11-20 16:24:56.202807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
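The "(00/08)" in each ABORTED completion above is the NVMe status pair (SCT/SC): status code type 0x0 (Generic Command Status) with status code 0x08, Command Aborted due to SQ Deletion, which is what every WRITE/READ still queued on qpair 1 completes with once the submission queue is torn down during the reset. A minimal sketch of decoding that pair from one of these log lines (the regex and the abbreviated status table are illustrative assumptions, not SPDK tooling):

    import re

    # Small subset of NVMe Generic Command Status codes (SCT 0x0).
    GENERIC_STATUS = {
        0x00: "SUCCESS - successful completion",
        0x07: "ABORTED - by request",
        0x08: "ABORTED - SQ deletion",
    }

    def decode_status(line):
        # Pull the "(SCT/SC)" hex pair out of an spdk_nvme_print_completion line.
        m = re.search(r"\(([0-9a-f]{2})/([0-9a-f]{2})\)", line)
        if not m:
            return "no status field found"
        sct, sc = int(m.group(1), 16), int(m.group(2), 16)
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
        return "SCT 0x%x, SC 0x%02x" % (sct, sc)

    print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."))
    # -> ABORTED - SQ deletion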
00:30:20.315 [2024-11-20 16:24:56.202816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.315 [2024-11-20 16:24:56.215589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.315 [2024-11-20 16:24:56.216184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.315 [2024-11-20 16:24:56.216224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.315 [2024-11-20 16:24:56.216236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.315 [2024-11-20 16:24:56.216476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.315 [2024-11-20 16:24:56.216696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.315 [2024-11-20 16:24:56.216705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.315 [2024-11-20 16:24:56.216715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.315 [2024-11-20 16:24:56.216723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.315 [2024-11-20 16:24:56.229490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.315 [2024-11-20 16:24:56.230063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.315 [2024-11-20 16:24:56.230103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.315 [2024-11-20 16:24:56.230114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.315 [2024-11-20 16:24:56.230361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.315 [2024-11-20 16:24:56.230582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.315 [2024-11-20 16:24:56.230591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.315 [2024-11-20 16:24:56.230599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.315 [2024-11-20 16:24:56.230608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
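Every retry cycle in this stretch fails the same way: connect() to 10.0.0.2:4420 returns errno = 111 (ECONNREFUSED) because nothing is listening while the target side is down, so nvme_tcp_qpair_connect_sock never gets a usable socket and the controller reset completes as failed. The refusal itself is ordinary TCP behaviour and can be reproduced with a bare socket (addresses taken from this log; only meaningful on a host where that subnet exists):

    import socket

    # Try the same TCP connection the NVMe-oF initiator keeps retrying.
    # With no listener on the port, connect() fails with ECONNREFUSED (111).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        try:
            s.connect(("10.0.0.2", 4420))
            print("connected - a target is listening")
        except ConnectionRefusedError as e:
            print("refused, errno =", e.errno)  # 111 on Linux, as in the log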
00:30:20.578 [2024-11-20 16:24:56.243369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.578 [2024-11-20 16:24:56.243986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.578 [2024-11-20 16:24:56.244028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.578 [2024-11-20 16:24:56.244039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.578 [2024-11-20 16:24:56.244292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.578 [2024-11-20 16:24:56.244514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.578 [2024-11-20 16:24:56.244524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.578 [2024-11-20 16:24:56.244532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.578 [2024-11-20 16:24:56.244540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.578 [2024-11-20 16:24:56.257307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.578 [2024-11-20 16:24:56.257888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.578 [2024-11-20 16:24:56.257932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.578 [2024-11-20 16:24:56.257943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.578 [2024-11-20 16:24:56.258191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.578 [2024-11-20 16:24:56.258414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.578 [2024-11-20 16:24:56.258423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.578 [2024-11-20 16:24:56.258431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.578 [2024-11-20 16:24:56.258439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.578 [2024-11-20 16:24:56.271214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.578 [2024-11-20 16:24:56.271819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.578 [2024-11-20 16:24:56.271863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.578 [2024-11-20 16:24:56.271875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.578 [2024-11-20 16:24:56.272114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.578 [2024-11-20 16:24:56.272345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.578 [2024-11-20 16:24:56.272355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.578 [2024-11-20 16:24:56.272363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.578 [2024-11-20 16:24:56.272372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.578 [2024-11-20 16:24:56.285147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.578 [2024-11-20 16:24:56.285707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.578 [2024-11-20 16:24:56.285753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.578 [2024-11-20 16:24:56.285764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.578 [2024-11-20 16:24:56.286005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.578 [2024-11-20 16:24:56.286237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.578 [2024-11-20 16:24:56.286253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.578 [2024-11-20 16:24:56.286262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.286270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.299053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.299613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.299635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.299644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.299860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.300077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.300085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.300092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.300099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.312864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.313506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.313554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.313565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.313807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.314029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.314038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.314046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.314054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.326633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.327294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.327344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.327357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.327600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.327823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.327832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.327840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.327854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.340431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.341063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.341118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.341131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.341392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.341616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.341626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.341634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.341643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
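The "resetting controller" entries land roughly every 14 ms (.299053, .312864, .326633, .340431 in the timestamps above), i.e. the initiator re-arms the reset almost immediately after each refused connect instead of backing off. A quick way to confirm the cadence from the bracketed timestamps (the parsing format is an assumption; extend the list as needed):

    # "resetting controller" timestamps copied from the entries above.
    log_times = ["16:24:56.299053", "16:24:56.312864",
                 "16:24:56.326633", "16:24:56.340431"]

    def to_seconds(ts):
        h, m, s = ts.split(":")
        return int(h) * 3600 + int(m) * 60 + float(s)

    secs = [to_seconds(t) for t in log_times]
    deltas = [(b - a) * 1000 for a, b in zip(secs, secs[1:])]
    print(["%.1f ms" % d for d in deltas])  # ~13.8 ms apart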
00:30:20.579 [2024-11-20 16:24:56.354205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.354801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.354861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.354874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.355124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.355361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.355372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.355381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.355390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.367972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.368635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.368698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.368711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.368964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.369203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.369213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.369222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.369232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.381816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.382520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.382590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.382603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.382856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.383079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.383089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.383097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.383106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.395700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.396373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.396436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.396449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.396702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.396926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.396935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.396944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.396953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.409571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.410281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.410343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.410356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.410608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.410833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.579 [2024-11-20 16:24:56.410842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.579 [2024-11-20 16:24:56.410852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.579 [2024-11-20 16:24:56.410861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.579 [2024-11-20 16:24:56.423463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.579 [2024-11-20 16:24:56.424018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.579 [2024-11-20 16:24:56.424046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.579 [2024-11-20 16:24:56.424055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.579 [2024-11-20 16:24:56.424293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.579 [2024-11-20 16:24:56.424513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.580 [2024-11-20 16:24:56.424523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.580 [2024-11-20 16:24:56.424530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.580 [2024-11-20 16:24:56.424538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.580 [2024-11-20 16:24:56.437302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.580 [2024-11-20 16:24:56.437875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-11-20 16:24:56.437900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.580 [2024-11-20 16:24:56.437909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.580 [2024-11-20 16:24:56.438129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.580 [2024-11-20 16:24:56.438358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.580 [2024-11-20 16:24:56.438369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.580 [2024-11-20 16:24:56.438376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.580 [2024-11-20 16:24:56.438384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.580 [2024-11-20 16:24:56.451193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.580 [2024-11-20 16:24:56.451740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-11-20 16:24:56.451763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.580 [2024-11-20 16:24:56.451771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.580 [2024-11-20 16:24:56.451988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.580 [2024-11-20 16:24:56.452216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.580 [2024-11-20 16:24:56.452226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.580 [2024-11-20 16:24:56.452234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.580 [2024-11-20 16:24:56.452244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.580 [2024-11-20 16:24:56.465055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.580 [2024-11-20 16:24:56.465761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-11-20 16:24:56.465825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.580 [2024-11-20 16:24:56.465840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.580 [2024-11-20 16:24:56.466092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.580 [2024-11-20 16:24:56.466328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.580 [2024-11-20 16:24:56.466346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.580 [2024-11-20 16:24:56.466355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.580 [2024-11-20 16:24:56.466365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.580 [2024-11-20 16:24:56.478942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.580 [2024-11-20 16:24:56.479558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-11-20 16:24:56.479589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.580 [2024-11-20 16:24:56.479598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.580 [2024-11-20 16:24:56.479817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.580 [2024-11-20 16:24:56.480035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.580 [2024-11-20 16:24:56.480045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.580 [2024-11-20 16:24:56.480053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.580 [2024-11-20 16:24:56.480061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.580 [2024-11-20 16:24:56.492830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.580 [2024-11-20 16:24:56.493475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-11-20 16:24:56.493538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.580 [2024-11-20 16:24:56.493551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.580 [2024-11-20 16:24:56.493803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.580 [2024-11-20 16:24:56.494028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.580 [2024-11-20 16:24:56.494038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.580 [2024-11-20 16:24:56.494046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.580 [2024-11-20 16:24:56.494055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.580 [2024-11-20 16:24:56.506763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.580 [2024-11-20 16:24:56.507464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.580 [2024-11-20 16:24:56.507527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.580 [2024-11-20 16:24:56.507540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.580 [2024-11-20 16:24:56.507792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.580 [2024-11-20 16:24:56.508016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.580 [2024-11-20 16:24:56.508026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.580 [2024-11-20 16:24:56.508035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.580 [2024-11-20 16:24:56.508051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.520667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.521284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.521333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.844 [2024-11-20 16:24:56.521344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.844 [2024-11-20 16:24:56.521584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.844 [2024-11-20 16:24:56.521807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.844 [2024-11-20 16:24:56.521816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.844 [2024-11-20 16:24:56.521824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.844 [2024-11-20 16:24:56.521833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.534500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.535189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.535253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.844 [2024-11-20 16:24:56.535266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.844 [2024-11-20 16:24:56.535518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.844 [2024-11-20 16:24:56.535742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.844 [2024-11-20 16:24:56.535752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.844 [2024-11-20 16:24:56.535760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.844 [2024-11-20 16:24:56.535769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.548441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.549147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.549220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.844 [2024-11-20 16:24:56.549233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.844 [2024-11-20 16:24:56.549485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.844 [2024-11-20 16:24:56.549710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.844 [2024-11-20 16:24:56.549719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.844 [2024-11-20 16:24:56.549728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.844 [2024-11-20 16:24:56.549738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.562319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.563004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.563074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.844 [2024-11-20 16:24:56.563087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.844 [2024-11-20 16:24:56.563369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.844 [2024-11-20 16:24:56.563596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.844 [2024-11-20 16:24:56.563605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.844 [2024-11-20 16:24:56.563614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.844 [2024-11-20 16:24:56.563623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.576190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.576886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.576949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.844 [2024-11-20 16:24:56.576961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.844 [2024-11-20 16:24:56.577228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.844 [2024-11-20 16:24:56.577453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.844 [2024-11-20 16:24:56.577462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.844 [2024-11-20 16:24:56.577471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.844 [2024-11-20 16:24:56.577480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.590059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.590747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.590811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.844 [2024-11-20 16:24:56.590824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.844 [2024-11-20 16:24:56.591076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.844 [2024-11-20 16:24:56.591316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.844 [2024-11-20 16:24:56.591328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.844 [2024-11-20 16:24:56.591336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.844 [2024-11-20 16:24:56.591346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.603920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.604581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.604644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.844 [2024-11-20 16:24:56.604656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.844 [2024-11-20 16:24:56.604916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.844 [2024-11-20 16:24:56.605140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.844 [2024-11-20 16:24:56.605151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.844 [2024-11-20 16:24:56.605173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.844 [2024-11-20 16:24:56.605183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.844 [2024-11-20 16:24:56.617759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.844 [2024-11-20 16:24:56.618445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.844 [2024-11-20 16:24:56.618508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.845 [2024-11-20 16:24:56.618521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.845 [2024-11-20 16:24:56.618773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.845 [2024-11-20 16:24:56.618998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.845 [2024-11-20 16:24:56.619007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.845 [2024-11-20 16:24:56.619016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.845 [2024-11-20 16:24:56.619025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.845 [2024-11-20 16:24:56.631635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.845 [2024-11-20 16:24:56.632296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.845 [2024-11-20 16:24:56.632360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.845 [2024-11-20 16:24:56.632373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.845 [2024-11-20 16:24:56.632625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.845 [2024-11-20 16:24:56.632849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.845 [2024-11-20 16:24:56.632858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.845 [2024-11-20 16:24:56.632867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.845 [2024-11-20 16:24:56.632876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.845 [2024-11-20 16:24:56.645470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.845 [2024-11-20 16:24:56.646024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.845 [2024-11-20 16:24:56.646053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.845 [2024-11-20 16:24:56.646062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.845 [2024-11-20 16:24:56.646293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.845 [2024-11-20 16:24:56.646513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.845 [2024-11-20 16:24:56.646538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.845 [2024-11-20 16:24:56.646545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.845 [2024-11-20 16:24:56.646553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.845 [2024-11-20 16:24:56.659345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.845 [2024-11-20 16:24:56.659911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.845 [2024-11-20 16:24:56.659935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.845 [2024-11-20 16:24:56.659944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.845 [2024-11-20 16:24:56.660171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.845 [2024-11-20 16:24:56.660390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.845 [2024-11-20 16:24:56.660401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.845 [2024-11-20 16:24:56.660408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.845 [2024-11-20 16:24:56.660416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:20.845 9432.67 IOPS, 36.85 MiB/s [2024-11-20T15:24:56.781Z]
[2024-11-20 16:24:56.673135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:20.845 [2024-11-20 16:24:56.673712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.845 [2024-11-20 16:24:56.673740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:20.845 [2024-11-20 16:24:56.673748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:20.845 [2024-11-20 16:24:56.673966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:20.845 [2024-11-20 16:24:56.674193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:20.845 [2024-11-20 16:24:56.674204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:20.845 [2024-11-20 16:24:56.674211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:20.845 [2024-11-20 16:24:56.674219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
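The bdevperf progress line interleaved above (9432.67 IOPS, 36.85 MiB/s) is self-consistent with the I/O size visible in the aborted commands earlier: len:8 blocks per command, which at a 512-byte LBA format (an assumption that makes the numbers agree) is 4 KiB per I/O:

    iops = 9432.67
    io_size = 8 * 512                      # len:8 blocks x 512-byte LBAs = 4 KiB
    mib_per_s = iops * io_size / 2**20
    print("%.2f MiB/s" % mib_per_s)        # 36.85, matching the log line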
00:30:20.845 [2024-11-20 16:24:56.686965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.845 [2024-11-20 16:24:56.687498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.845 [2024-11-20 16:24:56.687523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.845 [2024-11-20 16:24:56.687532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.845 [2024-11-20 16:24:56.687750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.845 [2024-11-20 16:24:56.687968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.845 [2024-11-20 16:24:56.687978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.845 [2024-11-20 16:24:56.687985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.845 [2024-11-20 16:24:56.688000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:20.845 [2024-11-20 16:24:56.700794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.845 [2024-11-20 16:24:56.701495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.845 [2024-11-20 16:24:56.701559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.845 [2024-11-20 16:24:56.701572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.845 [2024-11-20 16:24:56.701825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.845 [2024-11-20 16:24:56.702049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.845 [2024-11-20 16:24:56.702058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.845 [2024-11-20 16:24:56.702067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.845 [2024-11-20 16:24:56.702076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:20.845 [2024-11-20 16:24:56.714666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.845 [2024-11-20 16:24:56.715379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.845 [2024-11-20 16:24:56.715442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.845 [2024-11-20 16:24:56.715455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.845 [2024-11-20 16:24:56.715708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.845 [2024-11-20 16:24:56.715933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.845 [2024-11-20 16:24:56.715943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.845 [2024-11-20 16:24:56.715951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.845 [2024-11-20 16:24:56.715960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:20.845 [2024-11-20 16:24:56.728577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.845 [2024-11-20 16:24:56.729208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.845 [2024-11-20 16:24:56.729242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.845 [2024-11-20 16:24:56.729251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.845 [2024-11-20 16:24:56.729474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.845 [2024-11-20 16:24:56.729692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.845 [2024-11-20 16:24:56.729702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.845 [2024-11-20 16:24:56.729709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.845 [2024-11-20 16:24:56.729717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:20.845 [2024-11-20 16:24:56.742482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.845 [2024-11-20 16:24:56.743152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.845 [2024-11-20 16:24:56.743231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.845 [2024-11-20 16:24:56.743244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.845 [2024-11-20 16:24:56.743497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.845 [2024-11-20 16:24:56.743721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.845 [2024-11-20 16:24:56.743731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.845 [2024-11-20 16:24:56.743740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.845 [2024-11-20 16:24:56.743749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:20.845 [2024-11-20 16:24:56.756338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.845 [2024-11-20 16:24:56.756985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.846 [2024-11-20 16:24:56.757048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.846 [2024-11-20 16:24:56.757061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.846 [2024-11-20 16:24:56.757328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.846 [2024-11-20 16:24:56.757554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.846 [2024-11-20 16:24:56.757563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.846 [2024-11-20 16:24:56.757572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.846 [2024-11-20 16:24:56.757581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:20.846 [2024-11-20 16:24:56.770171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:20.846 [2024-11-20 16:24:56.770830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.846 [2024-11-20 16:24:56.770894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:20.846 [2024-11-20 16:24:56.770906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:20.846 [2024-11-20 16:24:56.771176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:20.846 [2024-11-20 16:24:56.771402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:20.846 [2024-11-20 16:24:56.771411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:20.846 [2024-11-20 16:24:56.771420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:20.846 [2024-11-20 16:24:56.771429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.108 [2024-11-20 16:24:56.784036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.108 [2024-11-20 16:24:56.784514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.108 [2024-11-20 16:24:56.784544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.108 [2024-11-20 16:24:56.784553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.108 [2024-11-20 16:24:56.784781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.108 [2024-11-20 16:24:56.785000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.108 [2024-11-20 16:24:56.785010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.108 [2024-11-20 16:24:56.785017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.108 [2024-11-20 16:24:56.785025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.108 [2024-11-20 16:24:56.797827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.108 [2024-11-20 16:24:56.798370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.108 [2024-11-20 16:24:56.798396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.108 [2024-11-20 16:24:56.798404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.108 [2024-11-20 16:24:56.798622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.108 [2024-11-20 16:24:56.798839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.108 [2024-11-20 16:24:56.798849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.108 [2024-11-20 16:24:56.798857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.108 [2024-11-20 16:24:56.798864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.108 [2024-11-20 16:24:56.811662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.108 [2024-11-20 16:24:56.812412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.108 [2024-11-20 16:24:56.812474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.108 [2024-11-20 16:24:56.812487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.108 [2024-11-20 16:24:56.812739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.108 [2024-11-20 16:24:56.812963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.108 [2024-11-20 16:24:56.812973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.108 [2024-11-20 16:24:56.812982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.108 [2024-11-20 16:24:56.812991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.108 [2024-11-20 16:24:56.825606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.108 [2024-11-20 16:24:56.826251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.108 [2024-11-20 16:24:56.826299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.108 [2024-11-20 16:24:56.826310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.108 [2024-11-20 16:24:56.826549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.108 [2024-11-20 16:24:56.826771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.108 [2024-11-20 16:24:56.826788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.108 [2024-11-20 16:24:56.826796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.108 [2024-11-20 16:24:56.826804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.108 [2024-11-20 16:24:56.839441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.108 [2024-11-20 16:24:56.840103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.108 [2024-11-20 16:24:56.840182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.108 [2024-11-20 16:24:56.840197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.108 [2024-11-20 16:24:56.840449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.108 [2024-11-20 16:24:56.840674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.108 [2024-11-20 16:24:56.840687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.108 [2024-11-20 16:24:56.840696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.108 [2024-11-20 16:24:56.840705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.108 [2024-11-20 16:24:56.853335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.108 [2024-11-20 16:24:56.854001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.108 [2024-11-20 16:24:56.854065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.108 [2024-11-20 16:24:56.854079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.108 [2024-11-20 16:24:56.854350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.108 [2024-11-20 16:24:56.854576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.108 [2024-11-20 16:24:56.854588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.108 [2024-11-20 16:24:56.854598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.108 [2024-11-20 16:24:56.854608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.108 [2024-11-20 16:24:56.867254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.108 [2024-11-20 16:24:56.867855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.108 [2024-11-20 16:24:56.867885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.108 [2024-11-20 16:24:56.867894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.108 [2024-11-20 16:24:56.868113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.868340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.868351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.868359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.868375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.109 [2024-11-20 16:24:56.881194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.881839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.881902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.881914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.882181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.882407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.882417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.882426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.882435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.109 [2024-11-20 16:24:56.895058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.895717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.895780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.895793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.896046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.896286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.896296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.896305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.896313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.109 [2024-11-20 16:24:56.908908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.909563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.909626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.909639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.909892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.910116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.910125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.910134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.910143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.109 [2024-11-20 16:24:56.922763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.923349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.923420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.923433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.923684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.923907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.923918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.923926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.923936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.109 [2024-11-20 16:24:56.936581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.937213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.937246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.937255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.937476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.937695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.937705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.937713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.937721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.109 [2024-11-20 16:24:56.950537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.951064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.951088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.951097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.951326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.951546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.951557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.951566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.951576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.109 [2024-11-20 16:24:56.964418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.965068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.965132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.965146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.965421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.965647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.965656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.965665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.965674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.109 [2024-11-20 16:24:56.978318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.978910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.978940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.978949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.979180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.979400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.979410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.979418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.979425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.109 [2024-11-20 16:24:56.992255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:56.992902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:56.992965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:56.992977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.109 [2024-11-20 16:24:56.993244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.109 [2024-11-20 16:24:56.993469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.109 [2024-11-20 16:24:56.993479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.109 [2024-11-20 16:24:56.993487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.109 [2024-11-20 16:24:56.993496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.109 [2024-11-20 16:24:57.006127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.109 [2024-11-20 16:24:57.006693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.109 [2024-11-20 16:24:57.006726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.109 [2024-11-20 16:24:57.006735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.110 [2024-11-20 16:24:57.006954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.110 [2024-11-20 16:24:57.007186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.110 [2024-11-20 16:24:57.007205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.110 [2024-11-20 16:24:57.007213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.110 [2024-11-20 16:24:57.007220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.110 [2024-11-20 16:24:57.020045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.110 [2024-11-20 16:24:57.020690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.110 [2024-11-20 16:24:57.020755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.110 [2024-11-20 16:24:57.020768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.110 [2024-11-20 16:24:57.021021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.110 [2024-11-20 16:24:57.021257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.110 [2024-11-20 16:24:57.021268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.110 [2024-11-20 16:24:57.021276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.110 [2024-11-20 16:24:57.021286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.110 [2024-11-20 16:24:57.033937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.110 [2024-11-20 16:24:57.034499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.110 [2024-11-20 16:24:57.034531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.110 [2024-11-20 16:24:57.034539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.110 [2024-11-20 16:24:57.034759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.110 [2024-11-20 16:24:57.034977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.110 [2024-11-20 16:24:57.034987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.110 [2024-11-20 16:24:57.034995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.110 [2024-11-20 16:24:57.035002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.373 [2024-11-20 16:24:57.047835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.373 [2024-11-20 16:24:57.048406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-11-20 16:24:57.048432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.373 [2024-11-20 16:24:57.048441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.373 [2024-11-20 16:24:57.048659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.373 [2024-11-20 16:24:57.048877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.373 [2024-11-20 16:24:57.048886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.373 [2024-11-20 16:24:57.048894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.373 [2024-11-20 16:24:57.048911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.373 [2024-11-20 16:24:57.061748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.373 [2024-11-20 16:24:57.062310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-11-20 16:24:57.062336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.373 [2024-11-20 16:24:57.062344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.373 [2024-11-20 16:24:57.062562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.373 [2024-11-20 16:24:57.062780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.373 [2024-11-20 16:24:57.062799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.373 [2024-11-20 16:24:57.062807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.373 [2024-11-20 16:24:57.062816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.373 [2024-11-20 16:24:57.075694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.373 [2024-11-20 16:24:57.076310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-11-20 16:24:57.076374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.373 [2024-11-20 16:24:57.076386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.373 [2024-11-20 16:24:57.076638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.373 [2024-11-20 16:24:57.076863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.373 [2024-11-20 16:24:57.076873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.373 [2024-11-20 16:24:57.076884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.373 [2024-11-20 16:24:57.076893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.373 [2024-11-20 16:24:57.089499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.373 [2024-11-20 16:24:57.090083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-11-20 16:24:57.090138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.373 [2024-11-20 16:24:57.090152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.373 [2024-11-20 16:24:57.090414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.373 [2024-11-20 16:24:57.090639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.373 [2024-11-20 16:24:57.090648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.373 [2024-11-20 16:24:57.090656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.373 [2024-11-20 16:24:57.090665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.373 [2024-11-20 16:24:57.103294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.373 [2024-11-20 16:24:57.103943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-11-20 16:24:57.104008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.373 [2024-11-20 16:24:57.104020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.373 [2024-11-20 16:24:57.104284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.373 [2024-11-20 16:24:57.104508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.373 [2024-11-20 16:24:57.104518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.373 [2024-11-20 16:24:57.104526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.104535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.374 [2024-11-20 16:24:57.117169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.117764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.117794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.117803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.374 [2024-11-20 16:24:57.118021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.374 [2024-11-20 16:24:57.118256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.374 [2024-11-20 16:24:57.118267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.374 [2024-11-20 16:24:57.118275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.118284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.374 [2024-11-20 16:24:57.130937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.131622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.131685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.131698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.374 [2024-11-20 16:24:57.131951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.374 [2024-11-20 16:24:57.132193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.374 [2024-11-20 16:24:57.132206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.374 [2024-11-20 16:24:57.132215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.132225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.374 [2024-11-20 16:24:57.144876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.145502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.145537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.145547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.374 [2024-11-20 16:24:57.145781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.374 [2024-11-20 16:24:57.146001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.374 [2024-11-20 16:24:57.146012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.374 [2024-11-20 16:24:57.146021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.146029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.374 [2024-11-20 16:24:57.158670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.159281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.159328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.159338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.374 [2024-11-20 16:24:57.159575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.374 [2024-11-20 16:24:57.159797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.374 [2024-11-20 16:24:57.159807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.374 [2024-11-20 16:24:57.159815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.159823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.374 [2024-11-20 16:24:57.172471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.173053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.173080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.173089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.374 [2024-11-20 16:24:57.173317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.374 [2024-11-20 16:24:57.173536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.374 [2024-11-20 16:24:57.173546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.374 [2024-11-20 16:24:57.173553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.173561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.374 [2024-11-20 16:24:57.186379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.186947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.186971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.186979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.374 [2024-11-20 16:24:57.187207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.374 [2024-11-20 16:24:57.187429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.374 [2024-11-20 16:24:57.187445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.374 [2024-11-20 16:24:57.187453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.187461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.374 [2024-11-20 16:24:57.200290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.200839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.200862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.200871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.374 [2024-11-20 16:24:57.201090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.374 [2024-11-20 16:24:57.201316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.374 [2024-11-20 16:24:57.201326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.374 [2024-11-20 16:24:57.201334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.374 [2024-11-20 16:24:57.201342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.374 [2024-11-20 16:24:57.214313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.374 [2024-11-20 16:24:57.214900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-11-20 16:24:57.214924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.374 [2024-11-20 16:24:57.214932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.375 [2024-11-20 16:24:57.215151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.375 [2024-11-20 16:24:57.215380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.375 [2024-11-20 16:24:57.215392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.375 [2024-11-20 16:24:57.215400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.375 [2024-11-20 16:24:57.215408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.375 [2024-11-20 16:24:57.228258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.375 [2024-11-20 16:24:57.228830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.375 [2024-11-20 16:24:57.228855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.375 [2024-11-20 16:24:57.228864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.375 [2024-11-20 16:24:57.229082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.375 [2024-11-20 16:24:57.229312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.375 [2024-11-20 16:24:57.229322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.375 [2024-11-20 16:24:57.229330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.375 [2024-11-20 16:24:57.229344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.375 [2024-11-20 16:24:57.242170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.375 [2024-11-20 16:24:57.242726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.375 [2024-11-20 16:24:57.242749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.375 [2024-11-20 16:24:57.242758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.375 [2024-11-20 16:24:57.242975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.375 [2024-11-20 16:24:57.243203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.375 [2024-11-20 16:24:57.243214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.375 [2024-11-20 16:24:57.243222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.375 [2024-11-20 16:24:57.243229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.375 [2024-11-20 16:24:57.256044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.375 [2024-11-20 16:24:57.256599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.375 [2024-11-20 16:24:57.256623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.375 [2024-11-20 16:24:57.256631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.375 [2024-11-20 16:24:57.256848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.375 [2024-11-20 16:24:57.257066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.375 [2024-11-20 16:24:57.257076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.375 [2024-11-20 16:24:57.257084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.375 [2024-11-20 16:24:57.257092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.375 [2024-11-20 16:24:57.269707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.375 [2024-11-20 16:24:57.270204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.375 [2024-11-20 16:24:57.270225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.375 [2024-11-20 16:24:57.270231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.375 [2024-11-20 16:24:57.270382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.375 [2024-11-20 16:24:57.270533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.375 [2024-11-20 16:24:57.270539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.375 [2024-11-20 16:24:57.270545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.375 [2024-11-20 16:24:57.270551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.375 [2024-11-20 16:24:57.282313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.375 [2024-11-20 16:24:57.282804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.375 [2024-11-20 16:24:57.282828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.375 [2024-11-20 16:24:57.282834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.375 [2024-11-20 16:24:57.282984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.375 [2024-11-20 16:24:57.283135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.375 [2024-11-20 16:24:57.283142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.375 [2024-11-20 16:24:57.283147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.375 [2024-11-20 16:24:57.283153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.375 [2024-11-20 16:24:57.294913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.375 [2024-11-20 16:24:57.295412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.375 [2024-11-20 16:24:57.295430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.375 [2024-11-20 16:24:57.295436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.375 [2024-11-20 16:24:57.295587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.375 [2024-11-20 16:24:57.295737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.375 [2024-11-20 16:24:57.295743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.375 [2024-11-20 16:24:57.295748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.375 [2024-11-20 16:24:57.295753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.638 [2024-11-20 16:24:57.307582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.638 [2024-11-20 16:24:57.308058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.638 [2024-11-20 16:24:57.308076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.638 [2024-11-20 16:24:57.308082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.638 [2024-11-20 16:24:57.308239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.638 [2024-11-20 16:24:57.308389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.638 [2024-11-20 16:24:57.308396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.638 [2024-11-20 16:24:57.308401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.638 [2024-11-20 16:24:57.308407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.638 [2024-11-20 16:24:57.320308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.638 [2024-11-20 16:24:57.320848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.638 [2024-11-20 16:24:57.320891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.638 [2024-11-20 16:24:57.320900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.638 [2024-11-20 16:24:57.321084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.638 [2024-11-20 16:24:57.321253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.638 [2024-11-20 16:24:57.321262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.638 [2024-11-20 16:24:57.321268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.638 [2024-11-20 16:24:57.321275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.638 [2024-11-20 16:24:57.332902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.638 [2024-11-20 16:24:57.333404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.638 [2024-11-20 16:24:57.333423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.638 [2024-11-20 16:24:57.333430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.638 [2024-11-20 16:24:57.333580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.638 [2024-11-20 16:24:57.333730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.638 [2024-11-20 16:24:57.333737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.638 [2024-11-20 16:24:57.333742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.638 [2024-11-20 16:24:57.333748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.638 [2024-11-20 16:24:57.345498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.345963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.345978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.345983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.346132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.346289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.346295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.346301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.346306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.358205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.358642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.358657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.358663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.358812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.358961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.358971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.358977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.358981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.370869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.371320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.371335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.371340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.371489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.371638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.371643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.371648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.371653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.383521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.383976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.383988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.383994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.384142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.384296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.384303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.384308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.384313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.396169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.396635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.396648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.396653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.396801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.396949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.396955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.396960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.396968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.408829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.409278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.409291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.409296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.409445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.409593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.409598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.409603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.409608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.421461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.421907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.421919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.421924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.422072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.422225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.422231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.422236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.422241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.434100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.434542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.434555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.434560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.434708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.434856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.434862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.434867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.434872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.446722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.447169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.447185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.447190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.447338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.447486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.447492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.447497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.447501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.639 [2024-11-20 16:24:57.459357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.639 [2024-11-20 16:24:57.459774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.639 [2024-11-20 16:24:57.459786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.639 [2024-11-20 16:24:57.459791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.639 [2024-11-20 16:24:57.459938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.639 [2024-11-20 16:24:57.460086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.639 [2024-11-20 16:24:57.460092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.639 [2024-11-20 16:24:57.460098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.639 [2024-11-20 16:24:57.460102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.471961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.472378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.472391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.472396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.472545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.472693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.472699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.472704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.472709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.484568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.485016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.485027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.485033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.485189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.485338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.485344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.485348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.485353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.497204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.497673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.497684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.497689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.497837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.497985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.497990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.497996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.498001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.509855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.510301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.510313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.510318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.510467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.510615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.510621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.510626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.510631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.522477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.522924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.522936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.522941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.523089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.523242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.523251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.523256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.523261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.535140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.535604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.535619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.535624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.535772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.535920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.535926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.535931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.535936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.547792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.548214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.548227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.548233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.548381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.548529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.548535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.548540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.548545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.640 [2024-11-20 16:24:57.560400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.640 [2024-11-20 16:24:57.560850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.640 [2024-11-20 16:24:57.560863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.640 [2024-11-20 16:24:57.560868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.640 [2024-11-20 16:24:57.561016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.640 [2024-11-20 16:24:57.561186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.640 [2024-11-20 16:24:57.561193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.640 [2024-11-20 16:24:57.561198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.640 [2024-11-20 16:24:57.561207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.902 [2024-11-20 16:24:57.573071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.902 [2024-11-20 16:24:57.573534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.902 [2024-11-20 16:24:57.573548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.902 [2024-11-20 16:24:57.573553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.902 [2024-11-20 16:24:57.573701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.902 [2024-11-20 16:24:57.573850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.902 [2024-11-20 16:24:57.573856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.902 [2024-11-20 16:24:57.573860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.902 [2024-11-20 16:24:57.573865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.902 [2024-11-20 16:24:57.585718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.902 [2024-11-20 16:24:57.586154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.902 [2024-11-20 16:24:57.586171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.902 [2024-11-20 16:24:57.586176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.902 [2024-11-20 16:24:57.586324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.902 [2024-11-20 16:24:57.586472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.902 [2024-11-20 16:24:57.586477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.902 [2024-11-20 16:24:57.586482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.902 [2024-11-20 16:24:57.586487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.902 [2024-11-20 16:24:57.598337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.902 [2024-11-20 16:24:57.598784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.902 [2024-11-20 16:24:57.598796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.902 [2024-11-20 16:24:57.598801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.902 [2024-11-20 16:24:57.598949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.902 [2024-11-20 16:24:57.599098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.599104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.599109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.599114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.610963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.611410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.611425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.611431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.611579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.611727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.611733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.611738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.611742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.623596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.624042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.624054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.624060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.624219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.624368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.624374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.624379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.624383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.636227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.636739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.636769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.636777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.636942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.637093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.637100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.637105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.637111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.648816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.649280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.649311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.649320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.649490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.649641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.649648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.649653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.649659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.661500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.661959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.661974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.661979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.662128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.662283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.662289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.662294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.662299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 7074.50 IOPS, 27.63 MiB/s [2024-11-20T15:24:57.839Z]
[2024-11-20 16:24:57.674141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.674556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.674569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.674575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.674723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.674871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.674877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.674882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.674886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.686716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.687166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.687179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.687184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.687332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.687481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.687489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.687494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.687499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
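The interleaved "7074.50 IOPS, 27.63 MiB/s" record above is the test's periodic throughput sample, printed alongside the error log. The two figures are two views of the same rate and are mutually consistent if each I/O is 4 KiB; the block size is not shown in this excerpt, so that is an assumption. A minimal C sanity check of the arithmetic (7074.50 x 4096 B = 28,977,152 B/s, which is ~27.64 MiB/s, matching the logged 27.63 up to rounding):

    #include <stdio.h>

    /* Cross-check the throughput sample: convert IOPS to MiB/s.
     * The 4 KiB I/O size is an assumption, not stated in the log. */
    int main(void)
    {
        double iops = 7074.50;
        double io_bytes = 4096.0;                        /* assumed 4 KiB */
        double mib_s = iops * io_bytes / (1024.0 * 1024.0);
        printf("%.2f IOPS -> %.2f MiB/s\n", iops, mib_s); /* ~27.64 */
        return 0;
    }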
00:30:21.903 [2024-11-20 16:24:57.699414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.699955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.699985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.699994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.700165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.700317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.700324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.700330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.700336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.712030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.712632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.712662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.903 [2024-11-20 16:24:57.712671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.903 [2024-11-20 16:24:57.712838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.903 [2024-11-20 16:24:57.712989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.903 [2024-11-20 16:24:57.712996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.903 [2024-11-20 16:24:57.713002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.903 [2024-11-20 16:24:57.713008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.903 [2024-11-20 16:24:57.724717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.903 [2024-11-20 16:24:57.725292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.903 [2024-11-20 16:24:57.725322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.904 [2024-11-20 16:24:57.725331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.904 [2024-11-20 16:24:57.725499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.904 [2024-11-20 16:24:57.725651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.904 [2024-11-20 16:24:57.725657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.904 [2024-11-20 16:24:57.725663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.904 [2024-11-20 16:24:57.725672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.904 [2024-11-20 16:24:57.737384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.904 [2024-11-20 16:24:57.737796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.904 [2024-11-20 16:24:57.737811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.904 [2024-11-20 16:24:57.737816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.904 [2024-11-20 16:24:57.737965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.904 [2024-11-20 16:24:57.738113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.904 [2024-11-20 16:24:57.738119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.904 [2024-11-20 16:24:57.738124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.904 [2024-11-20 16:24:57.738128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.904 [2024-11-20 16:24:57.749961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.904 [2024-11-20 16:24:57.750417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.904 [2024-11-20 16:24:57.750430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.904 [2024-11-20 16:24:57.750436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.904 [2024-11-20 16:24:57.750584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.904 [2024-11-20 16:24:57.750732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.904 [2024-11-20 16:24:57.750738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.904 [2024-11-20 16:24:57.750743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.904 [2024-11-20 16:24:57.750747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.904 [2024-11-20 16:24:57.762582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.904 [2024-11-20 16:24:57.762983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.904 [2024-11-20 16:24:57.762995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.904 [2024-11-20 16:24:57.763001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.904 [2024-11-20 16:24:57.763148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.904 [2024-11-20 16:24:57.763303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.904 [2024-11-20 16:24:57.763309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.904 [2024-11-20 16:24:57.763314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.904 [2024-11-20 16:24:57.763319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.904 [2024-11-20 16:24:57.775182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.904 [2024-11-20 16:24:57.775641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.904 [2024-11-20 16:24:57.775654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.904 [2024-11-20 16:24:57.775659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.904 [2024-11-20 16:24:57.775808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.904 [2024-11-20 16:24:57.775956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.904 [2024-11-20 16:24:57.775962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.904 [2024-11-20 16:24:57.775967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.904 [2024-11-20 16:24:57.775971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.904 [2024-11-20 16:24:57.787804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:21.904 [2024-11-20 16:24:57.788218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.904 [2024-11-20 16:24:57.788230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420
00:30:21.904 [2024-11-20 16:24:57.788235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set
00:30:21.904 [2024-11-20 16:24:57.788383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor
00:30:21.904 [2024-11-20 16:24:57.788532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:21.904 [2024-11-20 16:24:57.788537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:21.904 [2024-11-20 16:24:57.788542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:21.904 [2024-11-20 16:24:57.788547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:21.904 [2024-11-20 16:24:57.800373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.904 [2024-11-20 16:24:57.800906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.904 [2024-11-20 16:24:57.800936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.904 [2024-11-20 16:24:57.800944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.904 [2024-11-20 16:24:57.801108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.904 [2024-11-20 16:24:57.801266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.904 [2024-11-20 16:24:57.801274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.904 [2024-11-20 16:24:57.801280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.904 [2024-11-20 16:24:57.801285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:21.904 [2024-11-20 16:24:57.812985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.904 [2024-11-20 16:24:57.813467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.904 [2024-11-20 16:24:57.813498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.904 [2024-11-20 16:24:57.813506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.904 [2024-11-20 16:24:57.813674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.904 [2024-11-20 16:24:57.813826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.904 [2024-11-20 16:24:57.813832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.904 [2024-11-20 16:24:57.813838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.904 [2024-11-20 16:24:57.813844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:21.904 [2024-11-20 16:24:57.825560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:21.904 [2024-11-20 16:24:57.826077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.904 [2024-11-20 16:24:57.826107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:21.904 [2024-11-20 16:24:57.826115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:21.904 [2024-11-20 16:24:57.826287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:21.904 [2024-11-20 16:24:57.826439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:21.904 [2024-11-20 16:24:57.826445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:21.904 [2024-11-20 16:24:57.826451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:21.904 [2024-11-20 16:24:57.826457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.167 [2024-11-20 16:24:57.838167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.167 [2024-11-20 16:24:57.838702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.167 [2024-11-20 16:24:57.838732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.167 [2024-11-20 16:24:57.838741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.167 [2024-11-20 16:24:57.838906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.167 [2024-11-20 16:24:57.839057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.167 [2024-11-20 16:24:57.839064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.167 [2024-11-20 16:24:57.839069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.167 [2024-11-20 16:24:57.839075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.167 [2024-11-20 16:24:57.850774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.167 [2024-11-20 16:24:57.851282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.167 [2024-11-20 16:24:57.851313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.167 [2024-11-20 16:24:57.851321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.167 [2024-11-20 16:24:57.851488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.167 [2024-11-20 16:24:57.851640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.167 [2024-11-20 16:24:57.851651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.167 [2024-11-20 16:24:57.851656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.167 [2024-11-20 16:24:57.851662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.167 [2024-11-20 16:24:57.863364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.167 [2024-11-20 16:24:57.863907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.167 [2024-11-20 16:24:57.863937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.167 [2024-11-20 16:24:57.863946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.167 [2024-11-20 16:24:57.864110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.167 [2024-11-20 16:24:57.864269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.167 [2024-11-20 16:24:57.864277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.167 [2024-11-20 16:24:57.864282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.167 [2024-11-20 16:24:57.864288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.167 [2024-11-20 16:24:57.875996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.167 [2024-11-20 16:24:57.876552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.167 [2024-11-20 16:24:57.876582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.167 [2024-11-20 16:24:57.876590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.167 [2024-11-20 16:24:57.876754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.167 [2024-11-20 16:24:57.876906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.167 [2024-11-20 16:24:57.876912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.167 [2024-11-20 16:24:57.876918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.167 [2024-11-20 16:24:57.876924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.167 [2024-11-20 16:24:57.888631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.167 [2024-11-20 16:24:57.889189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.167 [2024-11-20 16:24:57.889219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.167 [2024-11-20 16:24:57.889228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.167 [2024-11-20 16:24:57.889392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.167 [2024-11-20 16:24:57.889543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.167 [2024-11-20 16:24:57.889550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.167 [2024-11-20 16:24:57.889555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.167 [2024-11-20 16:24:57.889564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.167 [2024-11-20 16:24:57.901266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.167 [2024-11-20 16:24:57.901721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.167 [2024-11-20 16:24:57.901735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.167 [2024-11-20 16:24:57.901741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.167 [2024-11-20 16:24:57.901889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.167 [2024-11-20 16:24:57.902037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.167 [2024-11-20 16:24:57.902043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.167 [2024-11-20 16:24:57.902048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.167 [2024-11-20 16:24:57.902053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.167 [2024-11-20 16:24:57.913884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.167 [2024-11-20 16:24:57.914252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:57.914266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:57.914271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:57.914419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:57.914567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:57.914573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:57.914578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:57.914582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.168 [2024-11-20 16:24:57.926558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.168 [2024-11-20 16:24:57.927049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:57.927079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:57.927088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:57.927259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:57.927411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:57.927417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:57.927423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:57.927429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.168 [2024-11-20 16:24:57.939129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.168 [2024-11-20 16:24:57.939599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:57.939629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:57.939638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:57.939802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:57.939954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:57.939960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:57.939965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:57.939971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.168 [2024-11-20 16:24:57.951822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.168 [2024-11-20 16:24:57.952286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:57.952317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:57.952325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:57.952490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:57.952642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:57.952648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:57.952653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:57.952659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.168 [2024-11-20 16:24:57.964510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.168 [2024-11-20 16:24:57.965105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:57.965135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:57.965144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:57.965317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:57.965470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:57.965476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:57.965481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:57.965488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.168 [2024-11-20 16:24:57.977195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.168 [2024-11-20 16:24:57.977658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:57.977673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:57.977678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:57.977832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:57.977980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:57.977986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:57.977991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:57.977996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.168 [2024-11-20 16:24:57.989851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.168 [2024-11-20 16:24:57.990301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:57.990332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:57.990340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:57.990507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:57.990659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:57.990665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:57.990671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:57.990677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.168 [2024-11-20 16:24:58.002525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.168 [2024-11-20 16:24:58.003068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.168 [2024-11-20 16:24:58.003098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.168 [2024-11-20 16:24:58.003107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.168 [2024-11-20 16:24:58.003278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.168 [2024-11-20 16:24:58.003430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.168 [2024-11-20 16:24:58.003437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.168 [2024-11-20 16:24:58.003442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.168 [2024-11-20 16:24:58.003448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.168 [2024-11-20 16:24:58.015143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.169 [2024-11-20 16:24:58.015689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.169 [2024-11-20 16:24:58.015720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.169 [2024-11-20 16:24:58.015728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.169 [2024-11-20 16:24:58.015893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.169 [2024-11-20 16:24:58.016044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.169 [2024-11-20 16:24:58.016054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.169 [2024-11-20 16:24:58.016060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.169 [2024-11-20 16:24:58.016065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.169 [2024-11-20 16:24:58.027776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.169 [2024-11-20 16:24:58.028350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.169 [2024-11-20 16:24:58.028381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.169 [2024-11-20 16:24:58.028389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.169 [2024-11-20 16:24:58.028553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.169 [2024-11-20 16:24:58.028705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.169 [2024-11-20 16:24:58.028712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.169 [2024-11-20 16:24:58.028717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.169 [2024-11-20 16:24:58.028723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.169 [2024-11-20 16:24:58.040424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.169 [2024-11-20 16:24:58.040972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.169 [2024-11-20 16:24:58.041002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.169 [2024-11-20 16:24:58.041011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.169 [2024-11-20 16:24:58.041182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.169 [2024-11-20 16:24:58.041334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.169 [2024-11-20 16:24:58.041341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.169 [2024-11-20 16:24:58.041347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.169 [2024-11-20 16:24:58.041353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.169 [2024-11-20 16:24:58.053043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.169 [2024-11-20 16:24:58.053601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.169 [2024-11-20 16:24:58.053631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.169 [2024-11-20 16:24:58.053639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.169 [2024-11-20 16:24:58.053804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.169 [2024-11-20 16:24:58.053955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.169 [2024-11-20 16:24:58.053961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.169 [2024-11-20 16:24:58.053967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.169 [2024-11-20 16:24:58.053977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.169 [2024-11-20 16:24:58.065679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.169 [2024-11-20 16:24:58.066147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.169 [2024-11-20 16:24:58.066183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.169 [2024-11-20 16:24:58.066192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.169 [2024-11-20 16:24:58.066356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.169 [2024-11-20 16:24:58.066507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.169 [2024-11-20 16:24:58.066514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.169 [2024-11-20 16:24:58.066519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.169 [2024-11-20 16:24:58.066525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.169 [2024-11-20 16:24:58.078370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.169 [2024-11-20 16:24:58.078733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.169 [2024-11-20 16:24:58.078751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.169 [2024-11-20 16:24:58.078756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.169 [2024-11-20 16:24:58.078907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.169 [2024-11-20 16:24:58.079056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.169 [2024-11-20 16:24:58.079062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.169 [2024-11-20 16:24:58.079067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.169 [2024-11-20 16:24:58.079072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.169 [2024-11-20 16:24:58.091057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.169 [2024-11-20 16:24:58.091625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.169 [2024-11-20 16:24:58.091656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.169 [2024-11-20 16:24:58.091665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.169 [2024-11-20 16:24:58.091830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.169 [2024-11-20 16:24:58.091983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.169 [2024-11-20 16:24:58.091990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.169 [2024-11-20 16:24:58.091995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.169 [2024-11-20 16:24:58.092001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.431 [2024-11-20 16:24:58.103709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.431 [2024-11-20 16:24:58.104167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.431 [2024-11-20 16:24:58.104186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.431 [2024-11-20 16:24:58.104192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.431 [2024-11-20 16:24:58.104341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.431 [2024-11-20 16:24:58.104489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.431 [2024-11-20 16:24:58.104495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.431 [2024-11-20 16:24:58.104500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.431 [2024-11-20 16:24:58.104505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.431 [2024-11-20 16:24:58.116347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.431 [2024-11-20 16:24:58.116754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.431 [2024-11-20 16:24:58.116766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.431 [2024-11-20 16:24:58.116772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.431 [2024-11-20 16:24:58.116920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.431 [2024-11-20 16:24:58.117068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.431 [2024-11-20 16:24:58.117074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.431 [2024-11-20 16:24:58.117079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.431 [2024-11-20 16:24:58.117084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.431 [2024-11-20 16:24:58.128920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.431 [2024-11-20 16:24:58.129368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.431 [2024-11-20 16:24:58.129381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.431 [2024-11-20 16:24:58.129387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.431 [2024-11-20 16:24:58.129535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.431 [2024-11-20 16:24:58.129683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.431 [2024-11-20 16:24:58.129689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.431 [2024-11-20 16:24:58.129694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.431 [2024-11-20 16:24:58.129698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.432 [2024-11-20 16:24:58.141534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.141981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.141993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.141998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.142149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.142303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.142309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.142314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.142319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.432 [2024-11-20 16:24:58.154146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.154703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.154733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.154742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.154906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.155058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.155064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.155069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.155075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.432 [2024-11-20 16:24:58.166786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.167266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.167297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.167306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.167472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.167632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.167639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.167644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.167650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.432 [2024-11-20 16:24:58.179496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.180046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.180076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.180085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.180256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.180408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.180418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.180424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.180430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.432 [2024-11-20 16:24:58.192150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.192709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.192740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.192749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.192914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.193067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.193074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.193079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.193085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.432 [2024-11-20 16:24:58.204795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.205223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.205253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.205262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.205429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.205581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.205587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.205592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.205598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.432 [2024-11-20 16:24:58.217445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.218009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.218040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.218049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.218221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.218374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.218381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.218386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.218396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.432 [2024-11-20 16:24:58.230105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.230694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.230725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.230734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.432 [2024-11-20 16:24:58.230899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.432 [2024-11-20 16:24:58.231050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.432 [2024-11-20 16:24:58.231058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.432 [2024-11-20 16:24:58.231063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.432 [2024-11-20 16:24:58.231069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.432 [2024-11-20 16:24:58.242846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.432 [2024-11-20 16:24:58.243306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.432 [2024-11-20 16:24:58.243321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.432 [2024-11-20 16:24:58.243326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.243475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.243624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.243630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.243635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.243640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.433 [2024-11-20 16:24:58.255484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.255998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.256028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.256037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.256208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.256360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.256366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.256372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.256377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.433 [2024-11-20 16:24:58.268139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.268688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.268722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.268730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.268895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.269046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.269052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.269058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.269064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.433 [2024-11-20 16:24:58.280773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.281288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.281319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.281328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.281494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.281646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.281652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.281658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.281663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.433 [2024-11-20 16:24:58.293378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.293942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.293972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.293981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.294145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.294304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.294312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.294318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.294323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.433 [2024-11-20 16:24:58.306029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.306536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.306552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.306558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.306710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.306858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.306864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.306869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.306874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.433 [2024-11-20 16:24:58.318706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.319154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.319172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.319177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.319325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.319474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.319479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.319484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.319489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.433 [2024-11-20 16:24:58.331327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.331772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.331784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.331790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.331938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.332086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.332092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.332096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.332101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.433 [2024-11-20 16:24:58.343926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.344478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.344509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.344517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.344684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.344836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.344846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.433 [2024-11-20 16:24:58.344852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.433 [2024-11-20 16:24:58.344857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.433 [2024-11-20 16:24:58.356556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.433 [2024-11-20 16:24:58.357013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.433 [2024-11-20 16:24:58.357028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.433 [2024-11-20 16:24:58.357034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.433 [2024-11-20 16:24:58.357188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.433 [2024-11-20 16:24:58.357337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.433 [2024-11-20 16:24:58.357343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.434 [2024-11-20 16:24:58.357348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.434 [2024-11-20 16:24:58.357353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.696 [2024-11-20 16:24:58.369197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.696 [2024-11-20 16:24:58.369649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.696 [2024-11-20 16:24:58.369662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.696 [2024-11-20 16:24:58.369668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.696 [2024-11-20 16:24:58.369816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.696 [2024-11-20 16:24:58.369966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.696 [2024-11-20 16:24:58.369971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.696 [2024-11-20 16:24:58.369976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.696 [2024-11-20 16:24:58.369981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.696 [2024-11-20 16:24:58.381816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.696 [2024-11-20 16:24:58.382125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.696 [2024-11-20 16:24:58.382138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.696 [2024-11-20 16:24:58.382144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.696 [2024-11-20 16:24:58.382296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.696 [2024-11-20 16:24:58.382446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.696 [2024-11-20 16:24:58.382451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.696 [2024-11-20 16:24:58.382456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.696 [2024-11-20 16:24:58.382464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.696 [2024-11-20 16:24:58.394454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.696 [2024-11-20 16:24:58.394870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.696 [2024-11-20 16:24:58.394883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.696 [2024-11-20 16:24:58.394888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.395036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.395190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.395196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.395201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.395205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.697 [2024-11-20 16:24:58.407043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.407598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.407629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.407637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.407801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.407953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.407959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.407965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.407970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.697 [2024-11-20 16:24:58.419674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.420174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.420190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.420195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.420344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.420492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.420498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.420503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.420507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.697 [2024-11-20 16:24:58.432349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.432896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.432930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.432939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.433103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.433262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.433269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.433274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.433280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.697 [2024-11-20 16:24:58.444975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.445462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.445493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.445501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.445666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.445818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.445824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.445829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.445835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.697 [2024-11-20 16:24:58.457673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.458126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.458141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.458146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.458316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.458466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.458472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.458477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.458482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.697 [2024-11-20 16:24:58.470318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.470834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.470864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.470873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.471041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.471200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.471208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.471214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.471220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.697 [2024-11-20 16:24:58.482915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.483519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.483550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.483559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.483723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.483874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.483881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.483886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.483892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.697 [2024-11-20 16:24:58.495600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.495957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.495972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.495978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.496126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.496282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.496289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.496294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.496299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.697 [2024-11-20 16:24:58.508266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.508782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.508813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.697 [2024-11-20 16:24:58.508821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.697 [2024-11-20 16:24:58.508986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.697 [2024-11-20 16:24:58.509137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.697 [2024-11-20 16:24:58.509147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.697 [2024-11-20 16:24:58.509153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.697 [2024-11-20 16:24:58.509166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.697 [2024-11-20 16:24:58.520858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.697 [2024-11-20 16:24:58.521438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.697 [2024-11-20 16:24:58.521468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.521477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.521642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.521793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.521800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.521805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.521811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.698 [2024-11-20 16:24:58.533525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.534077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.534107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.534116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.534290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.534442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.534448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.534454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.534460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.698 [2024-11-20 16:24:58.546153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.546698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.546729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.546738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.546902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.547053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.547059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.547065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.547077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.698 [2024-11-20 16:24:58.558779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.559259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.559290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.559298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.559465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.559617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.559623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.559629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.559635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.698 [2024-11-20 16:24:58.571483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.571940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.571971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.571980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.572145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.572305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.572313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.572318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.572324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.698 [2024-11-20 16:24:58.584164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.584695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.584726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.584734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.584898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.585049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.585056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.585061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.585067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.698 [2024-11-20 16:24:58.596765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.597236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.597270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.597279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.597446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.597597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.597603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.597609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.597615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.698 [2024-11-20 16:24:58.609341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.609892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.609922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.609931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.610095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.610254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.610261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.610267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.610273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.698 [2024-11-20 16:24:58.621979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.698 [2024-11-20 16:24:58.622527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.698 [2024-11-20 16:24:58.622558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.698 [2024-11-20 16:24:58.622567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.698 [2024-11-20 16:24:58.622731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.698 [2024-11-20 16:24:58.622883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.698 [2024-11-20 16:24:58.622890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.698 [2024-11-20 16:24:58.622896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.698 [2024-11-20 16:24:58.622901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.960 [2024-11-20 16:24:58.634623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.960 [2024-11-20 16:24:58.635077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-11-20 16:24:58.635092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.960 [2024-11-20 16:24:58.635098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.960 [2024-11-20 16:24:58.635256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.960 [2024-11-20 16:24:58.635405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.635411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.961 [2024-11-20 16:24:58.635416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.961 [2024-11-20 16:24:58.635421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.961 [2024-11-20 16:24:58.647265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.961 [2024-11-20 16:24:58.647611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.961 [2024-11-20 16:24:58.647624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.961 [2024-11-20 16:24:58.647629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.961 [2024-11-20 16:24:58.647777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.961 [2024-11-20 16:24:58.647925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.647932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.961 [2024-11-20 16:24:58.647937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.961 [2024-11-20 16:24:58.647941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.961 [2024-11-20 16:24:58.659833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.961 [2024-11-20 16:24:58.660293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.961 [2024-11-20 16:24:58.660307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.961 [2024-11-20 16:24:58.660312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.961 [2024-11-20 16:24:58.660460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.961 [2024-11-20 16:24:58.660608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.660615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.961 [2024-11-20 16:24:58.660620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.961 [2024-11-20 16:24:58.660625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.961 5659.60 IOPS, 22.11 MiB/s [2024-11-20T15:24:58.897Z] [2024-11-20 16:24:58.672618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.961 [2024-11-20 16:24:58.673065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.961 [2024-11-20 16:24:58.673078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.961 [2024-11-20 16:24:58.673084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.961 [2024-11-20 16:24:58.673236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.961 [2024-11-20 16:24:58.673385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.673394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.961 [2024-11-20 16:24:58.673399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.961 [2024-11-20 16:24:58.673404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
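Interleaved with the reconnect errors above is a periodic throughput sample ("5659.60 IOPS, 22.11 MiB/s"). As a quick sanity check on the units: 22.11 MiB/s is 22.11 x 1,048,576 = 23,184,015 B/s, and 23,184,015 / 5659.60 = 4096 bytes per operation, so the workload appears to be issuing 4 KiB I/Os, consistent with a 4 KiB block-size test.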
00:30:22.961 [2024-11-20 16:24:58.685246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.961 [2024-11-20 16:24:58.685693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.961 [2024-11-20 16:24:58.685705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.961 [2024-11-20 16:24:58.685710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.961 [2024-11-20 16:24:58.685858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.961 [2024-11-20 16:24:58.686006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.686012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.961 [2024-11-20 16:24:58.686017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.961 [2024-11-20 16:24:58.686022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.961 [2024-11-20 16:24:58.697866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.961 [2024-11-20 16:24:58.698457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.961 [2024-11-20 16:24:58.698487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.961 [2024-11-20 16:24:58.698496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.961 [2024-11-20 16:24:58.698660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.961 [2024-11-20 16:24:58.698812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.698819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.961 [2024-11-20 16:24:58.698824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.961 [2024-11-20 16:24:58.698830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.961 [2024-11-20 16:24:58.710537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.961 [2024-11-20 16:24:58.711065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.961 [2024-11-20 16:24:58.711095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.961 [2024-11-20 16:24:58.711104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.961 [2024-11-20 16:24:58.711278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.961 [2024-11-20 16:24:58.711431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.711437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.961 [2024-11-20 16:24:58.711443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.961 [2024-11-20 16:24:58.711452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.961 [2024-11-20 16:24:58.723162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.961 [2024-11-20 16:24:58.723650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.961 [2024-11-20 16:24:58.723680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.961 [2024-11-20 16:24:58.723689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.961 [2024-11-20 16:24:58.723853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.961 [2024-11-20 16:24:58.724005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.961 [2024-11-20 16:24:58.724012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.724018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.962 [2024-11-20 16:24:58.724024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.962 [2024-11-20 16:24:58.735752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.962 [2024-11-20 16:24:58.736119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.962 [2024-11-20 16:24:58.736134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.962 [2024-11-20 16:24:58.736140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.962 [2024-11-20 16:24:58.736294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.962 [2024-11-20 16:24:58.736443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.962 [2024-11-20 16:24:58.736449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.736454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.962 [2024-11-20 16:24:58.736459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.962 [2024-11-20 16:24:58.748326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.962 [2024-11-20 16:24:58.748671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.962 [2024-11-20 16:24:58.748685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.962 [2024-11-20 16:24:58.748691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.962 [2024-11-20 16:24:58.748839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.962 [2024-11-20 16:24:58.748988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.962 [2024-11-20 16:24:58.748993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.748999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.962 [2024-11-20 16:24:58.749003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.962 [2024-11-20 16:24:58.760985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.962 [2024-11-20 16:24:58.761441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.962 [2024-11-20 16:24:58.761454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.962 [2024-11-20 16:24:58.761459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.962 [2024-11-20 16:24:58.761607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.962 [2024-11-20 16:24:58.761755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.962 [2024-11-20 16:24:58.761761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.761766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.962 [2024-11-20 16:24:58.761770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.962 [2024-11-20 16:24:58.773615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.962 [2024-11-20 16:24:58.774067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.962 [2024-11-20 16:24:58.774079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.962 [2024-11-20 16:24:58.774084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.962 [2024-11-20 16:24:58.774236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.962 [2024-11-20 16:24:58.774385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.962 [2024-11-20 16:24:58.774391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.774396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.962 [2024-11-20 16:24:58.774400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.962 [2024-11-20 16:24:58.786276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.962 [2024-11-20 16:24:58.786690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.962 [2024-11-20 16:24:58.786702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.962 [2024-11-20 16:24:58.786707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.962 [2024-11-20 16:24:58.786855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.962 [2024-11-20 16:24:58.787003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.962 [2024-11-20 16:24:58.787009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.787014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.962 [2024-11-20 16:24:58.787019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.962 [2024-11-20 16:24:58.798857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.962 [2024-11-20 16:24:58.799178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.962 [2024-11-20 16:24:58.799190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.962 [2024-11-20 16:24:58.799195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.962 [2024-11-20 16:24:58.799347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.962 [2024-11-20 16:24:58.799495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.962 [2024-11-20 16:24:58.799500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.799505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.962 [2024-11-20 16:24:58.799510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.962 [2024-11-20 16:24:58.811512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.962 [2024-11-20 16:24:58.811959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.962 [2024-11-20 16:24:58.811972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.962 [2024-11-20 16:24:58.811977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.962 [2024-11-20 16:24:58.812126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.962 [2024-11-20 16:24:58.812278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.962 [2024-11-20 16:24:58.812285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.962 [2024-11-20 16:24:58.812290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.963 [2024-11-20 16:24:58.812294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.963 [2024-11-20 16:24:58.824131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.963 [2024-11-20 16:24:58.824666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.963 [2024-11-20 16:24:58.824696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.963 [2024-11-20 16:24:58.824705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.963 [2024-11-20 16:24:58.824869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.963 [2024-11-20 16:24:58.825022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.963 [2024-11-20 16:24:58.825028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.963 [2024-11-20 16:24:58.825034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.963 [2024-11-20 16:24:58.825040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.963 [2024-11-20 16:24:58.836770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.963 [2024-11-20 16:24:58.837360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.963 [2024-11-20 16:24:58.837391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.963 [2024-11-20 16:24:58.837399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.963 [2024-11-20 16:24:58.837563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.963 [2024-11-20 16:24:58.837715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.963 [2024-11-20 16:24:58.837726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.963 [2024-11-20 16:24:58.837731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.963 [2024-11-20 16:24:58.837737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.963 [2024-11-20 16:24:58.849454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.963 [2024-11-20 16:24:58.849914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.963 [2024-11-20 16:24:58.849929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.963 [2024-11-20 16:24:58.849935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.963 [2024-11-20 16:24:58.850083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.963 [2024-11-20 16:24:58.850238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.963 [2024-11-20 16:24:58.850244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.963 [2024-11-20 16:24:58.850249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.963 [2024-11-20 16:24:58.850254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.963 [2024-11-20 16:24:58.862092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.963 [2024-11-20 16:24:58.862547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.963 [2024-11-20 16:24:58.862560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.963 [2024-11-20 16:24:58.862566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.963 [2024-11-20 16:24:58.862714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.963 [2024-11-20 16:24:58.862863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.963 [2024-11-20 16:24:58.862868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.963 [2024-11-20 16:24:58.862873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.963 [2024-11-20 16:24:58.862878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.963 [2024-11-20 16:24:58.874734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.963 [2024-11-20 16:24:58.875189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.963 [2024-11-20 16:24:58.875203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.963 [2024-11-20 16:24:58.875208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.963 [2024-11-20 16:24:58.875356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.963 [2024-11-20 16:24:58.875505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.963 [2024-11-20 16:24:58.875510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.963 [2024-11-20 16:24:58.875515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.963 [2024-11-20 16:24:58.875523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.963 [2024-11-20 16:24:58.887366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.963 [2024-11-20 16:24:58.887809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.963 [2024-11-20 16:24:58.887821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:22.963 [2024-11-20 16:24:58.887826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:22.963 [2024-11-20 16:24:58.887974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:22.963 [2024-11-20 16:24:58.888122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.963 [2024-11-20 16:24:58.888128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.963 [2024-11-20 16:24:58.888133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.963 [2024-11-20 16:24:58.888138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.225 [2024-11-20 16:24:58.899990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.225 [2024-11-20 16:24:58.900351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 16:24:58.900364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.900369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.900517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.900665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.900671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.900676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.900681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.226 [2024-11-20 16:24:58.912672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:58.913114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:58.913126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.913131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.913283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.913432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.913438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.913443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.913447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.226 [2024-11-20 16:24:58.925294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:58.925742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:58.925754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.925759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.925907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.926055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.926061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.926066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.926070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.226 [2024-11-20 16:24:58.937932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:58.938390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:58.938403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.938408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.938556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.938705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.938710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.938715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.938720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.226 [2024-11-20 16:24:58.950568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:58.951014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:58.951026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.951032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.951184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.951333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.951339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.951344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.951348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.226 [2024-11-20 16:24:58.963196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:58.963764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:58.963794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.963803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.963971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.964123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.964129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.964135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.964141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.226 [2024-11-20 16:24:58.975870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:58.976218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:58.976233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.976239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.976388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.976537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.976543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.976548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.976553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.226 [2024-11-20 16:24:58.988550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:58.988998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:58.989011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:58.989017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:58.989170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:58.989319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:58.989325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:58.989330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:58.989335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.226 [2024-11-20 16:24:59.001185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:59.001605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:59.001617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:59.001622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:59.001770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:59.001918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:59.001927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:59.001931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:59.001936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.226 [2024-11-20 16:24:59.013784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.226 [2024-11-20 16:24:59.014228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 16:24:59.014241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.226 [2024-11-20 16:24:59.014247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.226 [2024-11-20 16:24:59.014395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.226 [2024-11-20 16:24:59.014543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.226 [2024-11-20 16:24:59.014549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.226 [2024-11-20 16:24:59.014554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.226 [2024-11-20 16:24:59.014559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.226 [2024-11-20 16:24:59.026433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.026887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.026900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.026905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.027053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.027215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.027221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.027226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.027231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.227 [2024-11-20 16:24:59.039086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.039532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.039544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.039550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.039698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.039846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.039851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.039856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.039864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.227 [2024-11-20 16:24:59.051723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.052142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.052154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.052164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.052312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.052460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.052466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.052471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.052476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.227 [2024-11-20 16:24:59.064323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.064858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.064889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.064897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.065062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.065221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.065228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.065233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.065239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.227 [2024-11-20 16:24:59.076959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.077508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.077538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.077547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.077711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.077863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.077869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.077875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.077881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.227 [2024-11-20 16:24:59.089606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.090043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.090072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.090081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.090254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.090406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.090412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.090418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.090424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.227 [2024-11-20 16:24:59.102292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.102751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.102765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.102771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.102919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.103068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.103074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.103079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.103084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.227 [2024-11-20 16:24:59.114936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.115410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.115423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.115429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.115577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.115726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.115732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.115736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.115741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.227 [2024-11-20 16:24:59.127606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.128056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.128068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.128074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.128230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.128379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.128385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.128390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.128395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.227 [2024-11-20 16:24:59.140257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.227 [2024-11-20 16:24:59.140735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.227 [2024-11-20 16:24:59.140747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.227 [2024-11-20 16:24:59.140753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.227 [2024-11-20 16:24:59.140900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.227 [2024-11-20 16:24:59.141048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.227 [2024-11-20 16:24:59.141055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.227 [2024-11-20 16:24:59.141059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.227 [2024-11-20 16:24:59.141064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.228 [2024-11-20 16:24:59.152917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.228 [2024-11-20 16:24:59.153447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.228 [2024-11-20 16:24:59.153478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.228 [2024-11-20 16:24:59.153486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.228 [2024-11-20 16:24:59.153651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.228 [2024-11-20 16:24:59.153802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.228 [2024-11-20 16:24:59.153809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.228 [2024-11-20 16:24:59.153814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.228 [2024-11-20 16:24:59.153820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.491 [2024-11-20 16:24:59.165533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.165773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.165789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.165794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.165943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.166092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.166105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.166110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.166115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1465209 Killed "${NVMF_APP[@]}" "$@" 00:30:23.491 [2024-11-20 16:24:59.178114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.178462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.178476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.178482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.178630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.178778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.178785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.178790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.178795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
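The bash notice on this line ('line 35: 1465209 Killed "${NVMF_APP[@]}" "$@"') records that the nvmf target application (PID 1465209) was killed, which is consistent with every host-side reconnect in this section being refused: the bdev_nvme layer keeps cycling through disconnect → connect attempt → ECONNREFUSED → "Resetting controller failed." at roughly 12-13 ms intervals. A self-contained sketch of that cadence (the helper below is a hypothetical stand-in, not an SPDK API, and the retry bound is illustrative only):

    /* Illustrative only: a generic bounded reconnect loop mirroring the
     * reset -> connect attempt -> refused -> "failed" cadence seen above.
     * try_connect_once() is a hypothetical placeholder, not an SPDK call. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool try_connect_once(void) { return false; /* target is down */ }

    int main(void)
    {
        const int max_retries = 8;                /* assumed bound, for the sketch */
        for (int i = 0; i < max_retries; i++) {
            printf("resetting controller (attempt %d)\n", i + 1);
            if (try_connect_once()) {
                printf("controller reinitialized\n");
                return 0;
            }
            printf("controller reinitialization failed\n");
            usleep(12 * 1000);                    /* ~12 ms spacing, as in the log */
        }
        printf("Resetting controller failed.\n");
        return 1;
    }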
00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1466816 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1466816 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1466816 ']' 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.491 16:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.491 [2024-11-20 16:24:59.190792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.191125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.191138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.191143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.191297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.191449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.191455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.191460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.191465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
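Here the harness restarts the target: tgt_init → nvmfappstart -m 0xE launches a fresh nvmf_tgt (nvmfpid=1466816) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A sketch of the wait-for-listen idea, under the assumption that "listening" is detected by a successful connect() to the RPC socket (an illustration of the concept, not the harness's actual implementation):

    /* Sketch: poll a UNIX-domain socket until a connect() succeeds,
     * i.e., until the target process is up and listening. Not the real
     * waitforlisten implementation. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int sock_is_listening(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) return 0;

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        /* /var/tmp/spdk.sock is the RPC socket named in the log */
        for (int i = 0; i < 100; i++) {           /* ~10 s budget, 100 ms steps */
            if (sock_is_listening("/var/tmp/spdk.sock")) {
                printf("target is up\n");
                return 0;
            }
            usleep(100 * 1000);
        }
        fprintf(stderr, "timed out waiting for listener\n");
        return 1;
    }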
00:30:23.491 [2024-11-20 16:24:59.203463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.203868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.203880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.203886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.204034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.204189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.204195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.204200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.204205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.491 [2024-11-20 16:24:59.216060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.216553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.216566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.216571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.216720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.216868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.216874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.216879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.216884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.491 [2024-11-20 16:24:59.228771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.229223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.229238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.229243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.229392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.229540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.229546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.229555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.229560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.491 [2024-11-20 16:24:59.240970] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:30:23.491 [2024-11-20 16:24:59.241016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.491 [2024-11-20 16:24:59.241417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.241669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.241680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.241686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.241834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.241982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.241988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.241993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.241998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.491 [2024-11-20 16:24:59.253995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.254579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.254610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.254619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.254787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.254938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.254946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.254952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.254958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.491 [2024-11-20 16:24:59.266674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.267123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.267138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.491 [2024-11-20 16:24:59.267143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.491 [2024-11-20 16:24:59.267297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.491 [2024-11-20 16:24:59.267447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.491 [2024-11-20 16:24:59.267452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.491 [2024-11-20 16:24:59.267461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.491 [2024-11-20 16:24:59.267466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.491 [2024-11-20 16:24:59.279252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.491 [2024-11-20 16:24:59.279837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.491 [2024-11-20 16:24:59.279868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.279877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.280042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.280199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.280206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.280212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.280218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.492 [2024-11-20 16:24:59.291921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.292430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.292445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.292451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.292599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.292748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.292754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.292759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.292764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.492 [2024-11-20 16:24:59.304611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.305070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.305083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.305088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.305241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.305390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.305396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.305401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.305406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.492 [2024-11-20 16:24:59.317252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.317678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.317708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.317717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.317882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.318033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.318040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.318046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.318052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.492 [2024-11-20 16:24:59.329912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.330039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:23.492 [2024-11-20 16:24:59.330556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.330586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.330596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.330761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.330913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.330920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.330925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.330931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.492 [2024-11-20 16:24:59.342511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.343101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.343132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.343141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.343317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.343469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.343476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.343482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.343488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.492 [2024-11-20 16:24:59.355206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.355778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.355814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.355823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.355988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.356140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.356146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.356152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.356163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.492 [2024-11-20 16:24:59.359509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.492 [2024-11-20 16:24:59.359531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.492 [2024-11-20 16:24:59.359538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.492 [2024-11-20 16:24:59.359543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.492 [2024-11-20 16:24:59.359548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:23.492 [2024-11-20 16:24:59.360778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.492 [2024-11-20 16:24:59.360934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.492 [2024-11-20 16:24:59.360937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.492 [2024-11-20 16:24:59.367879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.368383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.368400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.368406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.368555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.368705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.368712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.368717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.368722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.492 [2024-11-20 16:24:59.380594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.381175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.381207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.381216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.381385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.381537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.381543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.381554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.381560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.492 [2024-11-20 16:24:59.393281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.393860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.393893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.393902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.394069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.394227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.394234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.394239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.394246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.492 [2024-11-20 16:24:59.405950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.406531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.406562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.406571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.406737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.406888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.492 [2024-11-20 16:24:59.406895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.492 [2024-11-20 16:24:59.406901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.492 [2024-11-20 16:24:59.406907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.492 [2024-11-20 16:24:59.418616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.492 [2024-11-20 16:24:59.419072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.492 [2024-11-20 16:24:59.419087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.492 [2024-11-20 16:24:59.419093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.492 [2024-11-20 16:24:59.419246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.492 [2024-11-20 16:24:59.419395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.493 [2024-11-20 16:24:59.419401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.493 [2024-11-20 16:24:59.419406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.493 [2024-11-20 16:24:59.419412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.755 [2024-11-20 16:24:59.431268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.755 [2024-11-20 16:24:59.431754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.755 [2024-11-20 16:24:59.431767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.755 [2024-11-20 16:24:59.431772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.755 [2024-11-20 16:24:59.431921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.755 [2024-11-20 16:24:59.432070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.755 [2024-11-20 16:24:59.432076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.755 [2024-11-20 16:24:59.432081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.755 [2024-11-20 16:24:59.432086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.755 [2024-11-20 16:24:59.443968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.755 [2024-11-20 16:24:59.444439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.755 [2024-11-20 16:24:59.444453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.755 [2024-11-20 16:24:59.444459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.755 [2024-11-20 16:24:59.444607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.755 [2024-11-20 16:24:59.444755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.755 [2024-11-20 16:24:59.444761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.755 [2024-11-20 16:24:59.444766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.755 [2024-11-20 16:24:59.444771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.756 [2024-11-20 16:24:59.456601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.457151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.457190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.457199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.457365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.457517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.457524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.457529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.457535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.756 [2024-11-20 16:24:59.469236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.469762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.469796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.469804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.469969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.470121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.470127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.470132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.470138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.756 [2024-11-20 16:24:59.481853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.482475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.482506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.482514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.482679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.482831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.482838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.482844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.482850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.756 [2024-11-20 16:24:59.494548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.495039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.495055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.495061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.495214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.495364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.495370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.495375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.495380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.756 [2024-11-20 16:24:59.507214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.507759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.507790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.507798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.507966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.508118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.508124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.508130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.508136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.756 [2024-11-20 16:24:59.519843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.520444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.520474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.520483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.520648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.520800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.520807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.520812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.520818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.756 [2024-11-20 16:24:59.532417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.532975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.533005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.533014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.533187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.533339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.533346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.533351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.533357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.756 [2024-11-20 16:24:59.545057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.545572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.545587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.545593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.545741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.545889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.545902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.545911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.545917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.756 [2024-11-20 16:24:59.557749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.558264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.558295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.558303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.558470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.558622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.558628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.756 [2024-11-20 16:24:59.558634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.756 [2024-11-20 16:24:59.558640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.756 [2024-11-20 16:24:59.570346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.756 [2024-11-20 16:24:59.570897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.756 [2024-11-20 16:24:59.570928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.756 [2024-11-20 16:24:59.570936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.756 [2024-11-20 16:24:59.571101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.756 [2024-11-20 16:24:59.571258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.756 [2024-11-20 16:24:59.571265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.571271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.571276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.757 [2024-11-20 16:24:59.582986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.583575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.583606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.583615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.583779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.583931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.583937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.583942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.583948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.757 [2024-11-20 16:24:59.595649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.596107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.596122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.596128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.596281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.596430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.596435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.596440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.596445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.757 [2024-11-20 16:24:59.608270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.608833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.608863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.608872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.609036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.609194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.609201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.609207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.609213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.757 [2024-11-20 16:24:59.620907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.621378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.621393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.621399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.621547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.621696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.621702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.621706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.621711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.757 [2024-11-20 16:24:59.633545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.634022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.634056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.634065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.634235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.634387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.634394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.634399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.634405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.757 [2024-11-20 16:24:59.646127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.646596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.646611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.646617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.646766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.646915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.646921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.646925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.646930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.757 [2024-11-20 16:24:59.658764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.659081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.659095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.659100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.659253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.659401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.659407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.659412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.659417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.757 [2024-11-20 16:24:59.672513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 4716.33 IOPS, 18.42 MiB/s [2024-11-20T15:24:59.693Z] [2024-11-20 16:24:59.672976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.672988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.672993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.673144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.673383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.673391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.673397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.673402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.757 [2024-11-20 16:24:59.685100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.757 [2024-11-20 16:24:59.685554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.757 [2024-11-20 16:24:59.685567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:23.757 [2024-11-20 16:24:59.685573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:23.757 [2024-11-20 16:24:59.685721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:23.757 [2024-11-20 16:24:59.685869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.757 [2024-11-20 16:24:59.685875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.757 [2024-11-20 16:24:59.685880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.757 [2024-11-20 16:24:59.685884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
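The "4716.33 IOPS, 18.42 MiB/s" fragment interleaved into the line above is a per-interval throughput sample (bdevperf-style output, to judge by its format) from the I/O load running alongside the reset loop; the two numbers are self-consistent: 4716.33 IOPS × 4096 bytes ≈ 18.42 MiB/s, i.e. 4 KiB I/Os.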
00:30:24.019 [2024-11-20 16:24:59.697720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.019 [2024-11-20 16:24:59.698187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.019 [2024-11-20 16:24:59.698200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.698205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.698353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.698502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.698507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.698512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.698517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.020 [2024-11-20 16:24:59.710343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.710790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.710802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.710808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.710956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.711104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.711113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.711118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.711123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.020 [2024-11-20 16:24:59.722949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.723425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.723456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.723464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.723631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.723782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.723789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.723794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.723800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.020 [2024-11-20 16:24:59.735645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.736096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.736111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.736116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.736268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.736417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.736423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.736429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.736434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.020 [2024-11-20 16:24:59.748295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.748727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.748740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.748745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.748893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.749040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.749046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.749051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.749056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.020 [2024-11-20 16:24:59.760894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.761473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.761504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.761513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.761678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.761829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.761836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.761841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.761847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.020 [2024-11-20 16:24:59.773563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.774114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.774144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.774153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.774324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.774476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.774483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.774488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.774494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.020 [2024-11-20 16:24:59.786193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.786615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.786646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.786654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.786818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.786970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.786976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.786981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.786987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.020 [2024-11-20 16:24:59.798827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.799445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.799478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.799487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.799651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.799803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.799809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.799815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.799821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.020 [2024-11-20 16:24:59.811522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.020 [2024-11-20 16:24:59.812081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 16:24:59.812111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.020 [2024-11-20 16:24:59.812120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.020 [2024-11-20 16:24:59.812291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.020 [2024-11-20 16:24:59.812444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.020 [2024-11-20 16:24:59.812450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.020 [2024-11-20 16:24:59.812455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.020 [2024-11-20 16:24:59.812461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.020 [2024-11-20 16:24:59.824160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.021 [2024-11-20 16:24:59.824689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 16:24:59.824719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.021 [2024-11-20 16:24:59.824727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.021 [2024-11-20 16:24:59.824892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.021 [2024-11-20 16:24:59.825044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.021 [2024-11-20 16:24:59.825050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.021 [2024-11-20 16:24:59.825055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.021 [2024-11-20 16:24:59.825061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.021 [2024-11-20 16:24:59.836770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.021 [2024-11-20 16:24:59.837139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 16:24:59.837155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.021 [2024-11-20 16:24:59.837165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.021 [2024-11-20 16:24:59.837319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.021 [2024-11-20 16:24:59.837468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.021 [2024-11-20 16:24:59.837474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.021 [2024-11-20 16:24:59.837479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.021 [2024-11-20 16:24:59.837484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.021 [2024-11-20 16:24:59.849459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.021 [2024-11-20 16:24:59.849913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 16:24:59.849926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.021 [2024-11-20 16:24:59.849931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.021 [2024-11-20 16:24:59.850079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.021 [2024-11-20 16:24:59.850250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.021 [2024-11-20 16:24:59.850257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.021 [2024-11-20 16:24:59.850262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.021 [2024-11-20 16:24:59.850267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.021 [2024-11-20 16:24:59.862100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.021 [2024-11-20 16:24:59.862659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 16:24:59.862689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.021 [2024-11-20 16:24:59.862698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.021 [2024-11-20 16:24:59.862863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.021 [2024-11-20 16:24:59.863014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.021 [2024-11-20 16:24:59.863021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.021 [2024-11-20 16:24:59.863026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.021 [2024-11-20 16:24:59.863032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.021 [2024-11-20 16:24:59.874753] .. 00:30:24.285 [2024-11-20 16:25:00.040087] (the reset/reconnect cycle above repeats 14 more times at roughly 12.6 ms intervals, identical apart from timestamps: resetting controller, connect() to 10.0.0.2 port 4420 fails with errno = 111, recv state of tqpair=0x20db000 set, flush fails with (9): Bad file descriptor, Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed)
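Every one of these retries dies on the same errno = 111 from connect(). A quick way to decode that value on the test host (a sketch; assumes python3 is available, nothing SPDK-specific):

    # Map errno 111 to its symbolic name and message (Linux).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused

In other words, nothing is listening on 10.0.0.2:4420 yet: the bdevperf host starts retrying before the target's listener exists, and the resets keep failing until the nvmf_subsystem_add_listener call further down.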
00:30:24.285 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.285 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:24.285 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.285 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.285 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.285 [2024-11-20 16:25:00.051803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.285 [2024-11-20 16:25:00.052437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.285 [2024-11-20 16:25:00.052468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.285 [2024-11-20 16:25:00.052477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.285 [2024-11-20 16:25:00.052642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.285 [2024-11-20 16:25:00.052794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.285 [2024-11-20 16:25:00.052801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.285 [2024-11-20 16:25:00.052806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.285 [2024-11-20 16:25:00.052812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.285 [2024-11-20 16:25:00.064409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.285 [2024-11-20 16:25:00.064884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.285 [2024-11-20 16:25:00.064900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.285 [2024-11-20 16:25:00.064906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.285 [2024-11-20 16:25:00.065054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.286 [2024-11-20 16:25:00.065208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.286 [2024-11-20 16:25:00.065214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.286 [2024-11-20 16:25:00.065220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.286 [2024-11-20 16:25:00.065225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.286 [2024-11-20 16:25:00.077088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.286 [2024-11-20 16:25:00.077528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.286 [2024-11-20 16:25:00.077542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.286 [2024-11-20 16:25:00.077548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.286 [2024-11-20 16:25:00.077696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.286 [2024-11-20 16:25:00.077850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.286 [2024-11-20 16:25:00.077857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.286 [2024-11-20 16:25:00.077862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.286 [2024-11-20 16:25:00.077867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.286 [2024-11-20 16:25:00.088441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.286 [2024-11-20 16:25:00.089724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.286 [2024-11-20 16:25:00.090285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.286 [2024-11-20 16:25:00.090316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.286 [2024-11-20 16:25:00.090325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.286 [2024-11-20 16:25:00.090490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.286 [2024-11-20 16:25:00.090642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.286 [2024-11-20 16:25:00.090649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.286 [2024-11-20 16:25:00.090654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.286 [2024-11-20 16:25:00.090660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.286 [2024-11-20 16:25:00.102381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.286 [2024-11-20 16:25:00.102854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.286 [2024-11-20 16:25:00.102869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.286 [2024-11-20 16:25:00.102875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.286 [2024-11-20 16:25:00.103024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.286 [2024-11-20 16:25:00.103177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.286 [2024-11-20 16:25:00.103184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.286 [2024-11-20 16:25:00.103189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.286 [2024-11-20 16:25:00.103194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.286 [2024-11-20 16:25:00.115026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.286 [2024-11-20 16:25:00.115638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.286 [2024-11-20 16:25:00.115668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.286 [2024-11-20 16:25:00.115677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.286 [2024-11-20 16:25:00.115842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.286 [2024-11-20 16:25:00.115994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.286 [2024-11-20 16:25:00.116001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.286 [2024-11-20 16:25:00.116007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.286 [2024-11-20 16:25:00.116013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.286 Malloc0 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.286 [2024-11-20 16:25:00.127739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.286 [2024-11-20 16:25:00.128259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.286 [2024-11-20 16:25:00.128290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.286 [2024-11-20 16:25:00.128299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.286 [2024-11-20 16:25:00.128466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.286 [2024-11-20 16:25:00.128618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.286 [2024-11-20 16:25:00.128625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.286 [2024-11-20 16:25:00.128631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.286 [2024-11-20 16:25:00.128637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.286 [2024-11-20 16:25:00.140355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.286 [2024-11-20 16:25:00.140825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.286 [2024-11-20 16:25:00.140840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.286 [2024-11-20 16:25:00.140846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.286 [2024-11-20 16:25:00.140995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.286 [2024-11-20 16:25:00.141143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.286 [2024-11-20 16:25:00.141153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.286 [2024-11-20 16:25:00.141164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:24.286 [2024-11-20 16:25:00.141169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.286 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.286 [2024-11-20 16:25:00.153008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.287 [2024-11-20 16:25:00.153493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.287 [2024-11-20 16:25:00.153524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20db000 with addr=10.0.0.2, port=4420 00:30:24.287 [2024-11-20 16:25:00.153533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db000 is same with the state(6) to be set 00:30:24.287 [2024-11-20 16:25:00.153698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db000 (9): Bad file descriptor 00:30:24.287 [2024-11-20 16:25:00.153849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.287 [2024-11-20 16:25:00.153856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.287 [2024-11-20 16:25:00.153862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.287 [2024-11-20 16:25:00.153867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.287 [2024-11-20 16:25:00.154924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.287 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.287 16:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1465585 00:30:24.287 [2024-11-20 16:25:00.165586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.547 [2024-11-20 16:25:00.234087] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
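Stripped of the retry noise interleaved above, the target bring-up between 16:25:00.088 and 16:25:00.155 is five RPCs. A minimal standalone sketch of the same sequence (assuming a running nvmf_tgt and SPDK's scripts/rpc.py against the default local RPC socket; the flags mirror the rpc_cmd calls in the log):

    # Create the TCP transport, back a namespace with a 64 MB malloc bdev,
    # and expose it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice fires, the host's next reset attempt connects, and the log flips to 'Resetting controller successful'.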
00:30:25.748 4873.57 IOPS, 19.04 MiB/s
[2024-11-20T15:25:03.068Z] 5881.75 IOPS, 22.98 MiB/s
[2024-11-20T15:25:04.008Z] 6664.67 IOPS, 26.03 MiB/s
[2024-11-20T15:25:04.950Z] 7299.30 IOPS, 28.51 MiB/s
[2024-11-20T15:25:05.892Z] 7811.00 IOPS, 30.51 MiB/s
[2024-11-20T15:25:06.901Z] 8251.00 IOPS, 32.23 MiB/s
[2024-11-20T15:25:07.845Z] 8620.38 IOPS, 33.67 MiB/s
[2024-11-20T15:25:08.787Z] 8930.29 IOPS, 34.88 MiB/s
[2024-11-20T15:25:08.787Z] 9191.27 IOPS, 35.90 MiB/s
00:30:32.851 Latency(us)
00:30:32.851 [2024-11-20T15:25:08.787Z] Device Information          : runtime(s)     IOPS    MiB/s    Fail/s    TO/s    Average      min       max
00:30:32.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:32.851 Verification LBA range: start 0x0 length 0x4000
00:30:32.851 Nvme1n1                     :      15.01  9193.39    35.91  13479.14    0.00    5626.61   542.72  14090.24
00:30:32.851 [2024-11-20T15:25:08.787Z] ===================================================================================================================
00:30:32.851 [2024-11-20T15:25:08.787Z] Total                       :             9193.39    35.91  13479.14    0.00    5626.61   542.72  14090.24
00:30:33.111 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:33.111 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:33.111 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.111 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:33.112 rmmod nvme_tcp
00:30:33.112 rmmod nvme_fabrics
00:30:33.112 rmmod nvme_keyring
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1466816 ']'
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1466816
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1466816 ']'
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1466816
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1466816
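A quick sanity check on the bdevperf summary above: with the job's fixed 4096-byte IO size, the MiB/s column is just IOPS scaled by IO size. A one-liner to reproduce the Nvme1n1 row (a sketch; any POSIX awk):

    # 9193.39 IOPS x 4096 bytes per IO, expressed in MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 9193.39 * 4096 / (1024 * 1024) }'
    # prints 35.91 MiB/s

The periodic updates obey the same relation, e.g. 4873.57 IOPS -> 19.04 MiB/s. The large Fail/s figure is consistent with a test that keeps resetting the controller while IO runs, so in-flight IOs are failed and retried throughout.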
00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1466816' 00:30:33.112 killing process with pid 1466816 00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1466816 00:30:33.112 16:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1466816 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.373 16:25:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.286 16:25:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.286 00:30:35.286 real 0m27.851s 00:30:35.286 user 1m2.857s 00:30:35.286 sys 0m7.548s 00:30:35.286 16:25:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.286 16:25:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.286 ************************************ 00:30:35.286 END TEST nvmf_bdevperf 00:30:35.286 ************************************ 00:30:35.286 16:25:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:35.286 16:25:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:35.286 16:25:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.286 16:25:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.549 ************************************ 00:30:35.549 START TEST nvmf_target_disconnect 00:30:35.549 ************************************ 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:35.549 * Looking for test storage... 
00:30:35.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:35.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.549 --rc genhtml_branch_coverage=1 00:30:35.549 --rc genhtml_function_coverage=1 00:30:35.549 --rc genhtml_legend=1 00:30:35.549 --rc geninfo_all_blocks=1 00:30:35.549 --rc geninfo_unexecuted_blocks=1 00:30:35.549 00:30:35.549 ' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:35.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.549 --rc genhtml_branch_coverage=1 00:30:35.549 --rc genhtml_function_coverage=1 00:30:35.549 --rc genhtml_legend=1 00:30:35.549 --rc geninfo_all_blocks=1 00:30:35.549 --rc geninfo_unexecuted_blocks=1 00:30:35.549 00:30:35.549 ' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:35.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.549 --rc genhtml_branch_coverage=1 00:30:35.549 --rc genhtml_function_coverage=1 00:30:35.549 --rc genhtml_legend=1 00:30:35.549 --rc geninfo_all_blocks=1 00:30:35.549 --rc geninfo_unexecuted_blocks=1 00:30:35.549 00:30:35.549 ' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:35.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.549 --rc genhtml_branch_coverage=1 00:30:35.549 --rc genhtml_function_coverage=1 00:30:35.549 --rc genhtml_legend=1 00:30:35.549 --rc geninfo_all_blocks=1 00:30:35.549 --rc geninfo_unexecuted_blocks=1 00:30:35.549 00:30:35.549 ' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.549 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:35.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.550 16:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
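Note the harness arms its cleanup before touching any network state: the 'trap nvmftestfini SIGINT SIGTERM EXIT' line above guarantees teardown runs whether the test passes, fails, or is interrupted. The idiom, reduced to a self-contained sketch (illustrative names, not the SPDK functions themselves):

    #!/usr/bin/env bash
    cleanup() {
        # Runs on normal exit, Ctrl-C, or kill: flush test IPs, drop the netns, etc.
        echo "tearing down test network" >&2
    }
    trap cleanup SIGINT SIGTERM EXIT
    # ... test body; every exit path passes through cleanup ...

This is why the 'ip -4 addr flush' and namespace removal steps still show up in the log even when an individual test step fails.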
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:43.696 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:43.696 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:43.696 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:43.696 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
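The gather_supported_nvmf_pci_devs pass above walks a table of vendor:device IDs and resolves each matching PCI function to its kernel net device through sysfs. A simplified, hedged recap of that discovery step (illustrative only; the real logic lives in nvmf/common.sh), matched to the Intel E810 pair this run found (0x8086:0x159b at 0000:4b:00.0/.1):

    # Hedged sketch of the NIC discovery above, not the actual nvmf/common.sh code:
    # match the Intel E810 vendor:device pair under sysfs and list its netdevs.
    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done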
00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:43.696 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:43.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:30:43.696 00:30:43.696 --- 10.0.0.2 ping statistics --- 00:30:43.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.696 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:30:43.697 00:30:43.697 --- 10.0.0.1 ping statistics --- 00:30:43.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.697 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.697 16:25:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:43.697 ************************************ 00:30:43.697 START TEST nvmf_target_disconnect_tc1 00:30:43.697 ************************************ 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:43.697 16:25:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.697 [2024-11-20 16:25:19.153388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.697 [2024-11-20 16:25:19.153489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eaad0 with addr=10.0.0.2, port=4420 00:30:43.697 [2024-11-20 16:25:19.153518] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:43.697 [2024-11-20 16:25:19.153538] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:43.697 [2024-11-20 16:25:19.153547] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:43.697 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:43.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:43.697 Initializing NVMe Controllers 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:43.697 00:30:43.697 real 0m0.142s 00:30:43.697 user 0m0.059s 00:30:43.697 sys 0m0.082s 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:43.697 ************************************ 00:30:43.697 END TEST nvmf_target_disconnect_tc1 00:30:43.697 ************************************ 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:43.697 ************************************ 00:30:43.697 START TEST nvmf_target_disconnect_tc2 00:30:43.697 ************************************ 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1472965 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1472965 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1472965 ']' 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.697 16:25:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:43.697 [2024-11-20 16:25:19.315772] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:30:43.697 [2024-11-20 16:25:19.315832] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.697 [2024-11-20 16:25:19.417124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.697 [2024-11-20 16:25:19.469173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.697 [2024-11-20 16:25:19.469222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
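The tc1 case above is an expected-failure assertion: with no target listening yet on 10.0.0.2:4420, the probe must fail (connect() errno 111), and the NOT wrapper converts that non-zero exit into a test pass (es=1, then the (( !es == 0 )) check). A minimal sketch of that pattern, assuming the reconnect example path; the real helper lives in autotest_common.sh:

    # Minimal sketch of the expected-failure pattern tc1 used above;
    # illustrative shape only, not the actual autotest_common.sh helper.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # the assertion passes only if the wrapped command failed
    }
    # With no target listening yet, the probe must die with ECONNREFUSED:
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'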
00:30:43.697 [2024-11-20 16:25:19.469230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.697 [2024-11-20 16:25:19.469238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.697 [2024-11-20 16:25:19.469244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.697 [2024-11-20 16:25:19.471260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:43.697 [2024-11-20 16:25:19.471540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:43.697 [2024-11-20 16:25:19.471700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:43.697 [2024-11-20 16:25:19.471702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.271 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.532 Malloc0 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.532 [2024-11-20 16:25:20.242230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.532 16:25:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.532 [2024-11-20 16:25:20.282652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1473058 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:44.532 16:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:46.450 16:25:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1472965 00:30:46.450 16:25:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error 
(sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 [2024-11-20 16:25:22.321523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write 
completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Read completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 Write completed with error (sct=0, sc=8) 00:30:46.450 starting I/O failed 00:30:46.450 [2024-11-20 16:25:22.321910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:46.450 [2024-11-20 16:25:22.322324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.322352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.322795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.322862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.323248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.323272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.323715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.323780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.324156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.324187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 
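The 32-entry dumps above are the reconnect workload's in-flight I/O being aborted when tc2 kills the target out from under it (sct=0/sc=8 decodes to the generic NVMe "command aborted due to SQ deletion" status), and the connect() storm that follows is its retry loop hitting the now-dead 10.0.0.2:4420. A condensed, hedged recap of the tc2 sequence driving this output (paths are repo-relative assumptions; the real script uses waitforlisten and the rpc_cmd wrapper rather than a bare sleep plus rpc.py):

    # Hedged recap of the tc2 flow, under the netns layout nvmftestinit built
    # (cvl_0_0 as 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 as 10.0.0.1 on the host).
    NS=(ip netns exec cvl_0_0_ns_spdk)
    "${NS[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    sleep 2   # stand-in for waitforlisten in the real script
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: start the 10-second reconnect workload, then yank the target
    # from under it -- this is what produces the aborts and errno-111 storm.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"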
00:30:46.450 [2024-11-20 16:25:22.324653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.324718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.325011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.325027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.325494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.325560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.325888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.325905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.326406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.326474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.450 qpair failed and we were unable to recover it. 00:30:46.450 [2024-11-20 16:25:22.326711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.450 [2024-11-20 16:25:22.326727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.326940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.326955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.327279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.327295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.327628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.327643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.327864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.327877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 
00:30:46.451 [2024-11-20 16:25:22.328094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.328109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.328239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.328257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Write completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Write completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Write completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Write completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Write completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Write completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 Read completed with error (sct=0, sc=8) 00:30:46.451 starting I/O failed 00:30:46.451 [2024-11-20 16:25:22.328578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.451 [2024-11-20 16:25:22.328829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.328858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.329095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.329111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.329512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.329529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.329842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.329856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.330094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.330110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.330521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.330599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.330813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.330831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.331173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.331188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.331397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.331412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 
00:30:46.451 [2024-11-20 16:25:22.331732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.331745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.332093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.332107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.332426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.332440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.332793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.332809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.333119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.333133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.333395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.333410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.333730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.333745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.334058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.334072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.334434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.334448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.334761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.334775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 
00:30:46.451 [2024-11-20 16:25:22.335093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.451 [2024-11-20 16:25:22.335109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.451 qpair failed and we were unable to recover it. 00:30:46.451 [2024-11-20 16:25:22.335301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.335315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.336584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.336623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.336853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.336867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.337196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.337210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.338307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.338342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.338666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.338682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.338991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.339004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.340330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.340369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.340710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.340725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 
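Every connect() failure in this storm reports errno = 111, which on Linux is ECONNREFUSED: after the kill -9, the kernel in the target namespace still owns 10.0.0.2, but nothing listens on 4420 anymore, so each reconnect attempt is refused outright. A quick decode, assuming python3 is present on the test host:

    # errno 111 above is ECONNREFUSED ("Connection refused"):
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'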
00:30:46.452 [2024-11-20 16:25:22.341033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.341047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.341246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.341259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.342123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.342152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.342563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.342579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.343605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.343637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.343976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.343995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.344184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.344200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.344571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.344588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.344930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.344944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 00:30:46.452 [2024-11-20 16:25:22.345262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.452 [2024-11-20 16:25:22.345277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.452 qpair failed and we were unable to recover it. 
00:30:46.452 [2024-11-20 16:25:22.345628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.452 [2024-11-20 16:25:22.345643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:46.452 qpair failed and we were unable to recover it.
[The same three-line failure pattern -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats back-to-back for every reconnect attempt from 2024-11-20 16:25:22.345628 through 16:25:22.430478 (log time 00:30:46.452 to 00:30:46.729), with only the timestamps varying. Every attempt targets the same address, port, and tqpair and fails with the same errno.]
00:30:46.729 [2024-11-20 16:25:22.430837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.729 [2024-11-20 16:25:22.430868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.729 qpair failed and we were unable to recover it. 00:30:46.729 [2024-11-20 16:25:22.431217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.729 [2024-11-20 16:25:22.431252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.729 qpair failed and we were unable to recover it. 00:30:46.729 [2024-11-20 16:25:22.431626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.729 [2024-11-20 16:25:22.431658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.729 qpair failed and we were unable to recover it. 00:30:46.729 [2024-11-20 16:25:22.432019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.729 [2024-11-20 16:25:22.432049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.729 qpair failed and we were unable to recover it. 00:30:46.729 [2024-11-20 16:25:22.432387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.729 [2024-11-20 16:25:22.432419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.729 qpair failed and we were unable to recover it. 00:30:46.729 [2024-11-20 16:25:22.432784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.729 [2024-11-20 16:25:22.432815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.729 qpair failed and we were unable to recover it. 00:30:46.729 [2024-11-20 16:25:22.433182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.729 [2024-11-20 16:25:22.433214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.729 qpair failed and we were unable to recover it. 00:30:46.729 [2024-11-20 16:25:22.434837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.434896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.435323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.435359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.437093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.437150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 
00:30:46.730 [2024-11-20 16:25:22.437562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.437597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.437962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.437993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.438343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.438375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.438755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.438789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.439194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.439228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.440155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.440230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.440507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.440540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.440914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.440945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.441304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.441335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.441693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.441725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 
00:30:46.730 [2024-11-20 16:25:22.442079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.442110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.442480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.442513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.442891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.442922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.443288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.443327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.443705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.443735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.444102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.444132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.444502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.444533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.444891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.444923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.445287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.445319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.445683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.445712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 
00:30:46.730 [2024-11-20 16:25:22.446084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.446113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.446476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.446505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.446882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.446912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.447249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.447279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.447647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.447676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.448032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.448062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.448438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.448470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.448833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.448864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.449228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.449259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.449623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.449653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 
00:30:46.730 [2024-11-20 16:25:22.449999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.450029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.450385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.450415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.450777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.450805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.451067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.451096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.730 [2024-11-20 16:25:22.451476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.730 [2024-11-20 16:25:22.451507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.730 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.451852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.451883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.452244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.452277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.452684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.452714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.453057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.453087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.453427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.453457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 
00:30:46.731 [2024-11-20 16:25:22.453816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.453848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.454252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.454282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.454628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.454657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.455015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.455045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.455391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.455422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.455765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.455796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.456146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.456188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.456540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.456571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.456816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.456845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.457191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.457221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 
00:30:46.731 [2024-11-20 16:25:22.457565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.457595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.457952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.457982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.458340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.458370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.458733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.458778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.459110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.459139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.459504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.459534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.459903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.459932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.460196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.460227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.460595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.460627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.460966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.460997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 
00:30:46.731 [2024-11-20 16:25:22.461420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.461452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.461794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.461824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.462197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.462227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.462648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.462678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.463005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.463035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.463414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.731 [2024-11-20 16:25:22.463444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.731 qpair failed and we were unable to recover it. 00:30:46.731 [2024-11-20 16:25:22.463808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.463838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.464209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.464240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.464621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.464649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.465017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.465047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 
00:30:46.732 [2024-11-20 16:25:22.465434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.465465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.465718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.465750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.466108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.466137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.466499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.466528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.466893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.466923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.467299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.467330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.467701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.467730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.468017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.468046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.468461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.468492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.468844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.468874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 
00:30:46.732 [2024-11-20 16:25:22.469129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.469170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.469536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.469566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.469902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.469932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.470285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.470316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.470686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.470715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.471083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.471111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.471463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.471493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.471724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.471757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.472141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.472185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.472524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.472553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 
00:30:46.732 [2024-11-20 16:25:22.472913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.472942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.473216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.473247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.473508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.473538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.473911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.473948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.474311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.474344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.474698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.474727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.475098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.475127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.475549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.475579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.475935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.475966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.476338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.476368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 
00:30:46.732 [2024-11-20 16:25:22.476644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.476672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.477023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.477052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.477391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.477422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.477787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.732 [2024-11-20 16:25:22.477816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.732 qpair failed and we were unable to recover it. 00:30:46.732 [2024-11-20 16:25:22.478186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.478217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.478577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.478607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.478956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.478985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.479366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.479397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.479769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.479799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.480140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.480194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 
00:30:46.733 [2024-11-20 16:25:22.480593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.480623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.480979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.481009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.481423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.481455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.481803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.481832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.482185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.482215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.482564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.482595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.482968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.482999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.483386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.483417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.483853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.483883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.484179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.484210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 
00:30:46.733 [2024-11-20 16:25:22.484583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.484612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.485056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.485086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.485453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.485486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.485845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.485874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.486207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.486239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.486638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.486667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.487034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.487063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.487419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.487451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.487802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.487832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 00:30:46.733 [2024-11-20 16:25:22.488089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.733 [2024-11-20 16:25:22.488120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.733 qpair failed and we were unable to recover it. 
00:30:46.739 [2024-11-20 16:25:22.564780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.564809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.565140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.565186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.565409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.565441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.565831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.565867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.566207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.566239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.566513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.566543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.566911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.566940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.567355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.567386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.567740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.567769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.568128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.568168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 
00:30:46.739 [2024-11-20 16:25:22.568518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.568547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.568929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.568958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.569312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.569342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.569723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.569753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.570095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.570125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.570489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.570520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.570885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.570916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.571333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.571365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.571604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.571636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.571883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.571915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 
00:30:46.739 [2024-11-20 16:25:22.572184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.572214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.572561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.572590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.572866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.572895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.573146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.573188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.573595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.573625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.573868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.573896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.574258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.574289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.574637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.574668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.575042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.575072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.575431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.575463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 
00:30:46.739 [2024-11-20 16:25:22.575834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.575863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.576234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.576265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.576506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.576536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.576891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.739 [2024-11-20 16:25:22.576929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.739 qpair failed and we were unable to recover it. 00:30:46.739 [2024-11-20 16:25:22.577266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.577296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.577676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.577707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.578094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.578123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.578481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.578512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.578886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.578915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.579273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.579305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 
00:30:46.740 [2024-11-20 16:25:22.579669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.579700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.579957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.579986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.580332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.580364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.580612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.580649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.581002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.581031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.581383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.581414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.581671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.581703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.582043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.582075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.582330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.582364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.582759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.582788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 
00:30:46.740 [2024-11-20 16:25:22.583153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.583196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.583546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.583575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.583942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.583973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.584334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.584365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.584711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.584740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.585096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.585125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.585568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.585598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.585986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.586017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.586383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.586413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.586631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.586662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 
00:30:46.740 [2024-11-20 16:25:22.587017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.587047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.587391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.587423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.587757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.587787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.588154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.588210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.588472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.588505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.588855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.740 [2024-11-20 16:25:22.588886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.740 qpair failed and we were unable to recover it. 00:30:46.740 [2024-11-20 16:25:22.589222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.589254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.589423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.589451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.589704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.589734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.589993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.590021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 
00:30:46.741 [2024-11-20 16:25:22.590280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.590310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.590664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.590694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.590938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.590968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.591215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.591245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.591565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.591594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.591965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.591994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.592338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.592368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.592603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.592634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.593007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.593037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.593393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.593425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 
00:30:46.741 [2024-11-20 16:25:22.593774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.593804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.594025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.594055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.594423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.594454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.594799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.594834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.595198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.595230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.595586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.595617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.595988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.596019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.596388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.596419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.596789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.596820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.597184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.597214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 
00:30:46.741 [2024-11-20 16:25:22.597419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.597448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.597796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.597827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.598156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.598201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.598542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.598571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.598917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.598946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.599330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.599361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.599592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.599622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.599991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.600022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.600374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.600405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.600823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.600852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 
00:30:46.741 [2024-11-20 16:25:22.601248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.601300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.601649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.601678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.602662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.602718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.603071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.741 [2024-11-20 16:25:22.603102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.741 qpair failed and we were unable to recover it. 00:30:46.741 [2024-11-20 16:25:22.603461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.603493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.603840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.603870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.604236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.604269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.604672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.604702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.605056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.605094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.605495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.605527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 
00:30:46.742 [2024-11-20 16:25:22.605805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.605834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.606210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.606241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.606619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.606649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.607008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.607036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.607390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.607422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.607783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.607814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.608031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.608061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.608400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.608431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.608796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.608825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.609180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.609211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 
00:30:46.742 [2024-11-20 16:25:22.609466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.609497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.609847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.609878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.610249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.610280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.610529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.610564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.610849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.610878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.611231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.611263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.611508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.611537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.611907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.611937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.612201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.612236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 00:30:46.742 [2024-11-20 16:25:22.612555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.742 [2024-11-20 16:25:22.612585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:46.742 qpair failed and we were unable to recover it. 
00:30:46.742 [2024-11-20 16:25:22.612969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.742 [2024-11-20 16:25:22.612998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:46.742 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 16:25:22.612969 through 16:25:22.693361 ...]
00:30:47.021 [2024-11-20 16:25:22.693330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.021 [2024-11-20 16:25:22.693361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:47.021 qpair failed and we were unable to recover it.
00:30:47.021 [2024-11-20 16:25:22.693718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.693747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.694109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.694138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.694511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.694541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.694897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.694927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.695277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.695310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.695656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.695685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.696036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.696066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.696416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.696447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.696769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.696799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.697175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.697207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 
00:30:47.021 [2024-11-20 16:25:22.697641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.021 [2024-11-20 16:25:22.697677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.021 qpair failed and we were unable to recover it. 00:30:47.021 [2024-11-20 16:25:22.698001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.698031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.698397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.698428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.698676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.698705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.699062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.699092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.699478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.699508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.699869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.699899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.700147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.700191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.700541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.700571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.700897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.700927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 
00:30:47.022 [2024-11-20 16:25:22.701282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.701314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.701664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.701694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.702061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.702090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.702489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.702520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.702885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.702914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.703286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.703317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.703576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.703607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.703928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.703958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.704212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.704245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.704595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.704625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 
00:30:47.022 [2024-11-20 16:25:22.704999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.705028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.705477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.705508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.705868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.705900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.706284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.706315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.706659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.706690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.707061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.707090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.707421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.707452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.707815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.707844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.708081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.708113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.708554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.708584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 
00:30:47.022 [2024-11-20 16:25:22.708944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.708972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.709378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.709409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.709783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.709813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.710188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.710220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.710542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.710571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.710930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.710959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.711342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.711374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.711729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.022 [2024-11-20 16:25:22.711758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.022 qpair failed and we were unable to recover it. 00:30:47.022 [2024-11-20 16:25:22.712110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.712140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.712530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.712560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 
00:30:47.023 [2024-11-20 16:25:22.712911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.712947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.713283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.713315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.713665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.713696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.714055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.714085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.714428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.714459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.714807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.714838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.715201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.715232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.715616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.715645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.715995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.716024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.716410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.716441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 
00:30:47.023 [2024-11-20 16:25:22.716851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.716881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.717104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.717135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.717535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.717566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.717925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.717955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.718308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.718339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.718754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.718784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.719139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.719197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.719598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.719629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.719996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.720026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.720232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.720265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 
00:30:47.023 [2024-11-20 16:25:22.720632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.720663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.721002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.721031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.721408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.721438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.721798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.721827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.722205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.722236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.722629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.722657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.722989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.723018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.723379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.723411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.023 qpair failed and we were unable to recover it. 00:30:47.023 [2024-11-20 16:25:22.723797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.023 [2024-11-20 16:25:22.723826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.724191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.724223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 
00:30:47.024 [2024-11-20 16:25:22.724584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.724614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.724972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.725000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.725376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.725406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.725734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.725763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.726127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.726171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.726529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.726558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.726928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.726957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.727208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.727241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.727513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.727542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.727895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.727924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 
00:30:47.024 [2024-11-20 16:25:22.728287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.728324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.728666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.728696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.729059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.729089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.729464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.729495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.729847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.729877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.730244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.730275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.730618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.730648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.731004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.731034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.731298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.731329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.731725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.731755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 
00:30:47.024 [2024-11-20 16:25:22.732126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.732156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.732524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.732554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.732925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.732954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.733313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.733344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.733708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.733737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.734083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.734112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.734469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.734501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.734865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.734894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.735252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.735284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.735654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.735683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 
00:30:47.024 [2024-11-20 16:25:22.735925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.735954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.736304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.736334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.736711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.736741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.737107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.737136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.737378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.024 [2024-11-20 16:25:22.737411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.024 qpair failed and we were unable to recover it. 00:30:47.024 [2024-11-20 16:25:22.737787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.737818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.738154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.738195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.738565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.738594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.738961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.738991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.739344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.739374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 
00:30:47.025 [2024-11-20 16:25:22.739733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.739763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.740075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.740105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.740484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.740514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.740879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.740908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.741259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.741288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.741627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.741657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.742017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.742046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.742412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.742442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.742784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.742813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 00:30:47.025 [2024-11-20 16:25:22.743060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.025 [2024-11-20 16:25:22.743091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.025 qpair failed and we were unable to recover it. 
00:30:47.025 [2024-11-20 16:25:22.743461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.025 [2024-11-20 16:25:22.743497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:47.025 qpair failed and we were unable to recover it.
[... 208 further identical connect() failed / sock connection error / qpair failed triplets omitted (errno = 111, tqpair=0x7f1848000b90, addr=10.0.0.2, port=4420), timestamps running from 16:25:22.743838 to 16:25:22.822359 ...]
00:30:47.031 [2024-11-20 16:25:22.822740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.031 [2024-11-20 16:25:22.822769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:47.031 qpair failed and we were unable to recover it.
00:30:47.031 [2024-11-20 16:25:22.823000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.823028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.823247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.823282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.823668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.823697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.823953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.823982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.824334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.824365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.824707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.824737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.824984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.825014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.825384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.825414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.825770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.825799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.826170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.826201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 
00:30:47.031 [2024-11-20 16:25:22.826552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.826582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.826839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.826868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.827292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.827323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.827676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.827706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.828059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.828088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.828424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.828456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.828810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.828841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.829205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.829242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.829505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.829534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.829909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.829938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 
00:30:47.031 [2024-11-20 16:25:22.830309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.830340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.830706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.830735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.831095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.831124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.831461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.831491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.831846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.831875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.832232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.832264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.832625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.832654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.833023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.833053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.833393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.833424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.833785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.833815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 
00:30:47.031 [2024-11-20 16:25:22.834174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.834206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.834605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.834635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.835014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.835043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.031 [2024-11-20 16:25:22.835289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.031 [2024-11-20 16:25:22.835320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.031 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.835692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.835721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.835995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.836026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.836279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.836310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.836706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.836736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.837113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.837143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.837515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.837545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 
00:30:47.032 [2024-11-20 16:25:22.837775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.837805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.838184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.838214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.838575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.838604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.838965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.838996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.839338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.839370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.839738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.839768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.840017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.840047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.840430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.840461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.840819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.840849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.841259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.841291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 
00:30:47.032 [2024-11-20 16:25:22.841678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.841707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.842064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.842094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.842351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.842382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.842630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.842659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.843037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.843067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.843433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.843465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.843712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.843741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.844092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.844128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.844553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.844585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.844936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.844966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 
00:30:47.032 [2024-11-20 16:25:22.845331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.845362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.845725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.845755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.846125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.846157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.846511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.846541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.846882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.846916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.847283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.847314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.032 [2024-11-20 16:25:22.847676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.032 [2024-11-20 16:25:22.847705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.032 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.848084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.848115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.848537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.848569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.848909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.848939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 
00:30:47.033 [2024-11-20 16:25:22.849318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.849350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.849710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.849740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.850184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.850216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.850470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.850500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.850854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.850884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.851257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.851289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.851640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.851671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.852098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.852127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.852418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.852449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.852841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.852871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 
00:30:47.033 [2024-11-20 16:25:22.853233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.853265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.853554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.853584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.853939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.853978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.854249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.854280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.854646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.854676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.854929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.854958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.855246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.855278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.855654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.855682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.856054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.856085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.856324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.856357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 
00:30:47.033 [2024-11-20 16:25:22.856753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.856784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.857151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.857197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.857561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.857591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.857944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.857973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.858382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.858413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.858674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.858704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.859107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.859138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.859508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.859546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.859822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.859852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.860205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.860237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 
00:30:47.033 [2024-11-20 16:25:22.860524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.860554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.860909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.860940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.861310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.861341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.861698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.861727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.033 [2024-11-20 16:25:22.862093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.033 [2024-11-20 16:25:22.862128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.033 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.862505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.862535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.862906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.862941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.863279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.863317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.863727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.863757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.864118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.864148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 
00:30:47.034 [2024-11-20 16:25:22.864590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.864622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.865021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.865052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.865301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.865332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.865587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.865617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.865891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.865922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.866266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.866297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.866619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.866648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.867035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.867065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.867487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.867518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.867906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.867937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 
00:30:47.034 [2024-11-20 16:25:22.868055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.868083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.868320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.868351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.868693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.868724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.869102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.869131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.869513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.869545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.869906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.869934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.870322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.870353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.870728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.870759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.871007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.871037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.871400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.871433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 
00:30:47.034 [2024-11-20 16:25:22.871774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.871803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.872156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.872199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.872552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.872582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.872951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.872982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.873352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.873382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.873754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.873784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.874170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.874201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.874538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.874573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.874805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.874834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 00:30:47.034 [2024-11-20 16:25:22.875201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.034 [2024-11-20 16:25:22.875232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.034 qpair failed and we were unable to recover it. 
00:30:47.313 [2024-11-20 16:25:22.951632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.951662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.952022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.952051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.952390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.952419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.952781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.952809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.953184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.953215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.953572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.953600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.953944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.953973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.954335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.954366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.954738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.954768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.955130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.955192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 
00:30:47.313 [2024-11-20 16:25:22.955547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.955577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.955940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.955969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.956312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.956343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.956701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.956730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.957177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.957209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.957600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.957629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.957989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.958017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.958343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.958373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.958739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.958768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.959129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.959167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 
00:30:47.313 [2024-11-20 16:25:22.959518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.959547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.959967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.959996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.960347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.960377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.960748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.960777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.961131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.961169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.961524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.961553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.961919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.961947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.962301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.313 [2024-11-20 16:25:22.962333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.313 qpair failed and we were unable to recover it. 00:30:47.313 [2024-11-20 16:25:22.962707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.962736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.962982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.963011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 
00:30:47.314 [2024-11-20 16:25:22.963388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.963418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.963782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.963811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.964151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.964191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.964588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.964616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.964995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.965024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.965405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.965436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.965809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.965839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.966181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.966219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.966583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.966612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.966978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.967007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 
00:30:47.314 [2024-11-20 16:25:22.967381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.967411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.967760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.967789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.968177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.968207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.968558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.968588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.968824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.968855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.969204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.969236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.969568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.969598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.969958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.969986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.970243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.970273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.970645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.970674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 
00:30:47.314 [2024-11-20 16:25:22.971029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.971058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.971327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.971358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.971733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.971762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.972122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.972151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.972523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.972554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.972913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.972941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.973300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.973330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.973682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.973711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.974073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.974103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.974470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.974500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 
00:30:47.314 [2024-11-20 16:25:22.974798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.974829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.975197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.975228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.975580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.975609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.975901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.975931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.976284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.976316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.976673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.314 [2024-11-20 16:25:22.976702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.314 qpair failed and we were unable to recover it. 00:30:47.314 [2024-11-20 16:25:22.977077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.977106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.977468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.977497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.977846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.977875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.978237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.978269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 
00:30:47.315 [2024-11-20 16:25:22.978662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.978691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.979068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.979097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.979449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.979480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.979824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.979853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.980219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.980250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.980502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.980534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.980897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.980926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.981368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.981404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.981689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.981718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.982078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.982107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 
00:30:47.315 [2024-11-20 16:25:22.982491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.982521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.982869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.982899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.983310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.983342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.983716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.983745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.984085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.984115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.984375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.984405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.984745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.984774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.985123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.985152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.985375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.985407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.985764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.985794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 
00:30:47.315 [2024-11-20 16:25:22.986134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.986175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.986553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.986583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.986942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.986971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.987331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.987362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.987728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.987758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.988127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.988155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.988520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.988549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.988917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.988945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.989292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.989322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.989691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.989720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 
00:30:47.315 [2024-11-20 16:25:22.990087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.990115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.990480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.990510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.990753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.990784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.315 [2024-11-20 16:25:22.991153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.315 [2024-11-20 16:25:22.991208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.315 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.991607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.991637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.992022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.992052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.992414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.992444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.992818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.992847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.993256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.993287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.993557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.993585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 
00:30:47.316 [2024-11-20 16:25:22.993958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.993987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.994321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.994353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.994718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.994746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.995107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.995137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.995508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.995537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.995676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.995707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.996083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.996113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.996491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.996529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.996906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.996935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.997294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.997324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 
00:30:47.316 [2024-11-20 16:25:22.997673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.997702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.998063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.998092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.998444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.998473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.998829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.998858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.999220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.999252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:22.999616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:22.999645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:23.000016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:23.000046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:23.000382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:23.000413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:23.000813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:23.000843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 00:30:47.316 [2024-11-20 16:25:23.001198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.316 [2024-11-20 16:25:23.001230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.316 qpair failed and we were unable to recover it. 
00:30:47.316 [2024-11-20 16:25:23.001632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.316 [2024-11-20 16:25:23.001660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:47.316 qpair failed and we were unable to recover it.
00:30:47.322 [2024-11-20 16:25:23.081859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.081890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.082258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.082288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.082691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.082720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.083064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.083099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.083457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.083493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.083834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.083864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.084127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.084155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.084532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.084570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.084939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.084968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.085337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.085368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 
00:30:47.322 [2024-11-20 16:25:23.085723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.085753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.086110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.086140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.086499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.086529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.086874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.086903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.087284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.087314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.087720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.087750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.088118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.088146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.088542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.088571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.089000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.089030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.089383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.089416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 
00:30:47.322 [2024-11-20 16:25:23.089754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.089782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.090143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.090185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.090554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.090582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.091023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.091051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.091423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.091453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.091804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.091833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.092192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.092223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.092631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.092660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.093030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.093060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 00:30:47.322 [2024-11-20 16:25:23.093314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.322 [2024-11-20 16:25:23.093347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.322 qpair failed and we were unable to recover it. 
00:30:47.322 [2024-11-20 16:25:23.093717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.093747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.094110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.094140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.094500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.094530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.094969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.094998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.095370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.095400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.095595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.095624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.096012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.096042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.096440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.096470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.096832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.096860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.097232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.097263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 
00:30:47.323 [2024-11-20 16:25:23.097627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.097656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.098021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.098050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.098386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.098416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.098653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.098682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.099041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.099077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.099428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.099458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.099877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.099907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.100261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.100291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.100520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.100548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.100914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.100943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 
00:30:47.323 [2024-11-20 16:25:23.101304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.101335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.101597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.101626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.102004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.102034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.102382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.102414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.102780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.102810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.103182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.103214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.103571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.103601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.103931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.103961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.104311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.104342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.104695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.104725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 
00:30:47.323 [2024-11-20 16:25:23.105104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.105133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.105493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.105524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.105899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.105927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.106305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.106335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.106710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.106739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.107103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.107139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.107388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.107417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.107776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.107806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.108054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.108083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.108480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.108510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 
00:30:47.323 [2024-11-20 16:25:23.108871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.323 [2024-11-20 16:25:23.108901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.323 qpair failed and we were unable to recover it. 00:30:47.323 [2024-11-20 16:25:23.109269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.109301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.109731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.109761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.110119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.110147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.110526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.110555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.110825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.110854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.111222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.111252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.111628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.111658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.112023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.112060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.112451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.112481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 
00:30:47.324 [2024-11-20 16:25:23.112709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.112737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.113115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.113144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.113525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.113555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.113884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.113913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.114276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.114313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.114677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.114707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.115035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.115064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.115429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.115460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.115823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.115853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.116142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.116183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 
00:30:47.324 [2024-11-20 16:25:23.116534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.116563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.116928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.116959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.117326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.117356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.117600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.117629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.118043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.118071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.118449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.118480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.118712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.118740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.119097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.119126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.119518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.119550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.119923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.119952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 
00:30:47.324 [2024-11-20 16:25:23.120305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.120337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.120758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.120787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.121053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.324 [2024-11-20 16:25:23.121082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.324 qpair failed and we were unable to recover it. 00:30:47.324 [2024-11-20 16:25:23.121433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.121465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.121828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.121857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.122221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.122250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.122613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.122642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.123005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.123035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.123395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.123425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.123785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.123814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 
00:30:47.325 [2024-11-20 16:25:23.124203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.124234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.124627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.124656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.125028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.125058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.125333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.125364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.125729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.125760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.126115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.126145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.126550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.126580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.126841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.126874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.127238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.127270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.127634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.127665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 
00:30:47.325 [2024-11-20 16:25:23.128011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.128040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.128389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.128421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.128797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.128826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.129194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.129224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.129620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.129655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.130008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.130038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.130384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.130414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.130777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.130807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.131178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.131208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.131457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.131486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 
00:30:47.325 [2024-11-20 16:25:23.131845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.131874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.132226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.132257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.132614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.132643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.132901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.132929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.133282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.133313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.133687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.133716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.134094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.134123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.325 [2024-11-20 16:25:23.134500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.325 [2024-11-20 16:25:23.134531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.325 qpair failed and we were unable to recover it. 00:30:47.326 [2024-11-20 16:25:23.134886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.326 [2024-11-20 16:25:23.134917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.326 qpair failed and we were unable to recover it. 00:30:47.326 [2024-11-20 16:25:23.135284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.326 [2024-11-20 16:25:23.135315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.326 qpair failed and we were unable to recover it. 
00:30:47.331 [2024-11-20 16:25:23.212129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.212172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.212534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.212563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.212802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.212834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.213200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.213232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.213625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.213654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.214015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.214045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.214407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.214437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.214797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.214827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.215193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.215223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.215614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.215644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 
00:30:47.331 [2024-11-20 16:25:23.216006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.216036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.331 qpair failed and we were unable to recover it. 00:30:47.331 [2024-11-20 16:25:23.216411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.331 [2024-11-20 16:25:23.216454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.216817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.216847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.217215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.217247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.217612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.217641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.218017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.218047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.218399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.218431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.218786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.218816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.219184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.219215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.219580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.219609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 
00:30:47.332 [2024-11-20 16:25:23.219958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.219987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.220344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.220375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.220737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.220767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.221153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.221202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.221596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.221625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.221982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.222014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.222380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.222411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.222753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.222783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.223119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.223149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.223571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.223601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 
00:30:47.332 [2024-11-20 16:25:23.223949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.223981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.224325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.224355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.224739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.224769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.225137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.225182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.225543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.225573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.225934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.225964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.226400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.226432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.226773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.226804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.227216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.227248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.227611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.227641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 
00:30:47.332 [2024-11-20 16:25:23.228010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.228039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.228366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.228399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.228774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.228804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.229177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.229209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.229570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.229600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.229950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.229979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.230367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.230398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.230730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.230760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.231120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.332 [2024-11-20 16:25:23.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.332 qpair failed and we were unable to recover it. 00:30:47.332 [2024-11-20 16:25:23.231524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.333 [2024-11-20 16:25:23.231554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.333 qpair failed and we were unable to recover it. 
00:30:47.333 [2024-11-20 16:25:23.231911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.333 [2024-11-20 16:25:23.231949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.333 qpair failed and we were unable to recover it. 00:30:47.333 [2024-11-20 16:25:23.232321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.333 [2024-11-20 16:25:23.232358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.333 qpair failed and we were unable to recover it. 00:30:47.333 [2024-11-20 16:25:23.232706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.333 [2024-11-20 16:25:23.232737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.333 qpair failed and we were unable to recover it. 00:30:47.333 [2024-11-20 16:25:23.233115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.333 [2024-11-20 16:25:23.233144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.333 qpair failed and we were unable to recover it. 00:30:47.333 [2024-11-20 16:25:23.233517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.333 [2024-11-20 16:25:23.233546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.333 qpair failed and we were unable to recover it. 00:30:47.333 [2024-11-20 16:25:23.233887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.333 [2024-11-20 16:25:23.233917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.333 qpair failed and we were unable to recover it. 00:30:47.605 [2024-11-20 16:25:23.234279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.234314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 00:30:47.605 [2024-11-20 16:25:23.234674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.234705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 00:30:47.605 [2024-11-20 16:25:23.235066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.235097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 00:30:47.605 [2024-11-20 16:25:23.235434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.235465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 
00:30:47.605 [2024-11-20 16:25:23.235805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.235835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 00:30:47.605 [2024-11-20 16:25:23.236087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.236121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 00:30:47.605 [2024-11-20 16:25:23.236469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.236502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 00:30:47.605 [2024-11-20 16:25:23.236866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.605 [2024-11-20 16:25:23.236897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.605 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.237263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.237295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.237657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.237687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.238050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.238078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.238447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.238480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.238825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.238855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.239229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.239260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 
00:30:47.606 [2024-11-20 16:25:23.239674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.239704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.240063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.240092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.240452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.240484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.240833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.240864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.241233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.241264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.241638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.241668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.242031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.242060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.242414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.242445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.242818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.242848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.243080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.243111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 
00:30:47.606 [2024-11-20 16:25:23.243481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.243511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.243874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.243902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.244265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.244294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.244508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.244540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.244903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.244932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.245202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.245233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.245468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.245500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.245842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.245872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.246217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.246248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.246595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.246626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 
00:30:47.606 [2024-11-20 16:25:23.247034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.247063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.247410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.247447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.247799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.247828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.248181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.248213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.248531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.248562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.248930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.248959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.249329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.249360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.249730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.249758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.250133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.250175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.250539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.250568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 
00:30:47.606 [2024-11-20 16:25:23.250940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.606 [2024-11-20 16:25:23.250970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.606 qpair failed and we were unable to recover it. 00:30:47.606 [2024-11-20 16:25:23.251302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.251332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.251698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.251727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.252094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.252122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.252495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.252525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.252830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.252859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.253219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.253249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.253626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.253654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.254076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.254106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.254367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.254396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 
00:30:47.607 [2024-11-20 16:25:23.254749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.254778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.255145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.255186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.255548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.255578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.255943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.255971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.256336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.256366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.256723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.256752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.257096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.257124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.257512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.257543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.257916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.257947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.258314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.258345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 
00:30:47.607 [2024-11-20 16:25:23.258712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.258742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.259096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.259125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.259501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.259532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.259896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.259926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.260298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.260329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.260754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.260783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.261042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.261074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.261520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.261551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.261808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.261836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 00:30:47.607 [2024-11-20 16:25:23.262192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.262222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 
00:30:47.607 [2024-11-20 16:25:23.262473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-11-20 16:25:23.262503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.607 qpair failed and we were unable to recover it. 
00:30:47.613 [... the same three-part error sequence repeats for every subsequent connection attempt through 2024-11-20 16:25:23.342270: connect() to 10.0.0.2 port 4420 returns errno = 111 (ECONNREFUSED) each time, and each qpair on tqpair=0x7f1848000b90 fails without recovery ...]
00:30:47.613 [2024-11-20 16:25:23.342620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.342650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.343011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.343040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.343340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.343370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.343723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.343752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.344139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.344178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.344543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.344573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.344992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.345021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.345424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.345454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.345821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.345849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.346218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.346250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 
00:30:47.613 [2024-11-20 16:25:23.346629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.346661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.347001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.347029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.347268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.347298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.347661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.347691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.348064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.348092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.348467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.348497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.348855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.348884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.349070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.349098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.349464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.349495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 00:30:47.613 [2024-11-20 16:25:23.349858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.613 [2024-11-20 16:25:23.349888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.613 qpair failed and we were unable to recover it. 
00:30:47.613 [2024-11-20 16:25:23.350237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.350269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.350516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.350545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.350898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.350929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.351201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.351233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.351619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.351648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.352026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.352055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.352413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.352444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.352812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.352841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.353273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.353303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.353676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.353705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 
00:30:47.614 [2024-11-20 16:25:23.353813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.353842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.354215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.354252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.354586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.354615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.354961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.354991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.355235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.355265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.355642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.355670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.355919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.355949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.356305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.356335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.356693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.356725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.357090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.357119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 
00:30:47.614 [2024-11-20 16:25:23.357409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.357439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.357790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.357821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.358196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.358227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.358635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.358665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.359006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.359036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.359301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.359332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.359757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.359785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.360046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.360074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.360439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.360470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.360764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.360793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 
00:30:47.614 [2024-11-20 16:25:23.361178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.361209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.361556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.361585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.361791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.361820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.614 qpair failed and we were unable to recover it. 00:30:47.614 [2024-11-20 16:25:23.362186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.614 [2024-11-20 16:25:23.362216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.362553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.362582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.362848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.362877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.363248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.363280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.363631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.363661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.363997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.364026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.364383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.364413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 
00:30:47.615 [2024-11-20 16:25:23.364786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.364816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.365181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.365213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.365457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.365487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.365937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.365967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.366315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.366346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.366732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.366761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.367136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.367176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.367423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.367452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.367721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.367750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.368146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.368188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 
00:30:47.615 [2024-11-20 16:25:23.368591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.368620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.368861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.368901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.369154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.369199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.369563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.369592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.369936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.369966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.370329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.370360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.370777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.370806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.371180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.371210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.371589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.371618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.371998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.372027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 
00:30:47.615 [2024-11-20 16:25:23.372396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.372427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.372788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.372817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.373179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.373211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.373590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.373618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.373990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.374020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.374383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.374415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.374779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.374808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.375185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.375216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.375598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.375628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 00:30:47.615 [2024-11-20 16:25:23.376002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.376031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.615 qpair failed and we were unable to recover it. 
00:30:47.615 [2024-11-20 16:25:23.376289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.615 [2024-11-20 16:25:23.376320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.376700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.376730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.376982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.377013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.377387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.377417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.377657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.377687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.377990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.378018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.378425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.378456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.378845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.378874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.379215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.379246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.379624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.379653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 
00:30:47.616 [2024-11-20 16:25:23.380014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.380043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.380407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.380437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.380777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.380807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.381182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.381212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.381551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.381582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.381945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.381975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.382366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.382397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.382761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.382790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.383179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.383209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.383563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.383593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 
00:30:47.616 [2024-11-20 16:25:23.383961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.383990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.384423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.384459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.384832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.384862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.385204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.385234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.385590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.385620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.386010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.386039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.386297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.386328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.386690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.386718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.386977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.387006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.387390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.387421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 
00:30:47.616 [2024-11-20 16:25:23.387794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.387824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.388057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.388086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.388354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.388385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.388743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.388772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.389120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.389149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.389555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.389585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.389938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.389967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.390332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.616 [2024-11-20 16:25:23.390363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.616 qpair failed and we were unable to recover it. 00:30:47.616 [2024-11-20 16:25:23.390737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.617 [2024-11-20 16:25:23.390767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.617 qpair failed and we were unable to recover it. 00:30:47.617 [2024-11-20 16:25:23.391130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.617 [2024-11-20 16:25:23.391170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.617 qpair failed and we were unable to recover it. 
00:30:47.617 [2024-11-20 16:25:23.391537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.617 [2024-11-20 16:25:23.391566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:47.617 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats back-to-back, roughly 200 times, from 16:25:23.391537 through 16:25:23.472654 (console timestamps 00:30:47.617-00:30:47.622): every connect() attempt fails with errno = 111 (ECONNREFUSED) against addr=10.0.0.2, port=4420 on tqpair=0x7f1848000b90, and each time the qpair fails without recovery ...]
00:30:47.622 [2024-11-20 16:25:23.473007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.473036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.473381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.473412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.473778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.473808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.474175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.474205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.474555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.474585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.474992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.475021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.475380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.475411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.475774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.475803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.476179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.476209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.476571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.476601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 
00:30:47.622 [2024-11-20 16:25:23.476976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.477005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.622 qpair failed and we were unable to recover it. 00:30:47.622 [2024-11-20 16:25:23.477359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.622 [2024-11-20 16:25:23.477390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.477735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.477764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.478124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.478153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.478565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.478594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.478957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.478986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.479241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.479272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.479640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.479669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.480043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.480072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.480419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.480450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 
00:30:47.623 [2024-11-20 16:25:23.480679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.480711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.481068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.481098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.481471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.481503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.481864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.481892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.482245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.482277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.482633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.482662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.483019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.483048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.483410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.483440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.483780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.483808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.484178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.484208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 
00:30:47.623 [2024-11-20 16:25:23.484466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.484495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.484741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.484769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.485131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.485170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.485506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.485535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.485900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.485928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.486293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.486323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.486666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.486695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.487128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.487168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.487526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.487563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.487901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.487931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 
00:30:47.623 [2024-11-20 16:25:23.488300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.488330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.488700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.488729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.489090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.489120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.489482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.489513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.489885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.489915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.490276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.490307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.490519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.490551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.490926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.490955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.491319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.623 [2024-11-20 16:25:23.491349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.623 qpair failed and we were unable to recover it. 00:30:47.623 [2024-11-20 16:25:23.491715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.491744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 
00:30:47.624 [2024-11-20 16:25:23.492121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.492149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.492408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.492440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.492710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.492739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.493079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.493110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.493517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.493547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.493927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.493956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.494327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.494357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.494701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.494730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.495071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.495101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.495477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.495507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 
00:30:47.624 [2024-11-20 16:25:23.495872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.495902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.496134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.496173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.496526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.496555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.496810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.496841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.497086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.497115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.497508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.497538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.497883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.497912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.498261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.498291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.498664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.498692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.499053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.499081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 
00:30:47.624 [2024-11-20 16:25:23.499435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.499465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.499830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.499859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.500230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.500259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.500653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.500682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.500921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.500953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.501334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.501364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.501706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.501736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.502071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.502100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.502466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.502503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.502846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.502875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 
00:30:47.624 [2024-11-20 16:25:23.503233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.503264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.503604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.503634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.624 [2024-11-20 16:25:23.503994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.624 [2024-11-20 16:25:23.504024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.624 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.504390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.504420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.504786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.504814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.505078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.505107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.505465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.505494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.505883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.505912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.506271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.506302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.506667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.506696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 
00:30:47.625 [2024-11-20 16:25:23.506821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.506851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.507217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.507250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.507616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.507645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.508012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.508041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.508406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.508438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.508811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.508839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.509192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.509223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.509611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.509640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.510008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.510037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.510291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.510321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 
00:30:47.625 [2024-11-20 16:25:23.510694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.510723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.511069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.511100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.511334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.511367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.511702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.511732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.511983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.512015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.512393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.512424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.512790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.512819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.513175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.513205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.513464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.513495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.513861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.513890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 
00:30:47.625 [2024-11-20 16:25:23.514254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.514293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.514691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.514720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.515051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.515081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.515468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.515498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.515854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.515883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.516322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.516353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.516712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.516740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.517105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.517133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.517369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.517406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.625 [2024-11-20 16:25:23.517755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.517785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 
00:30:47.625 [2024-11-20 16:25:23.518177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.625 [2024-11-20 16:25:23.518207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.625 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.518550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.518580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.518949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.518978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.519338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.519369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.519722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.519753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.520111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.520142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.520384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.520413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.520745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.520775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.521145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.521188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 00:30:47.626 [2024-11-20 16:25:23.521434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.626 [2024-11-20 16:25:23.521466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:47.626 qpair failed and we were unable to recover it. 
00:30:47.626 [2024-11-20 16:25:23.524905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.626 [2024-11-20 16:25:23.525001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:47.626 qpair failed and we were unable to recover it.
00:30:47.900 [2024-11-20 16:25:23.536702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.900 [2024-11-20 16:25:23.536732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:47.900 qpair failed and we were unable to recover it.
00:30:47.900 [2024-11-20 16:25:23.537105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.900 [2024-11-20 16:25:23.537135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.900 qpair failed and we were unable to recover it. 00:30:47.900 [2024-11-20 16:25:23.537580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.900 [2024-11-20 16:25:23.537611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.900 qpair failed and we were unable to recover it. 00:30:47.900 [2024-11-20 16:25:23.537860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.537895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.538118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.538147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.538529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.538559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.538919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.538949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.539303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.539333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.539711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.539740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.540070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.540099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.540446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.540476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 
00:30:47.901 [2024-11-20 16:25:23.540850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.540879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.541242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.541272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.541623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.541653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.542053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.542082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.542270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.542300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.542670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.542699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.542938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.542966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.543239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.543272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.543617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.543646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.543902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.543935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 
00:30:47.901 [2024-11-20 16:25:23.544281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.544310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.544528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.544559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.544796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.544825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.545231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.545262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.545634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.545663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.545994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.546026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.546398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.546428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.546801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.546831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.547186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.547216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.547585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.547621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 
00:30:47.901 [2024-11-20 16:25:23.547989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.548018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.548391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.548421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.548661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.548693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.549093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.549122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.549500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.549529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.549887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.549915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.550287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.550318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.550661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.550691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.550945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.550977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 00:30:47.901 [2024-11-20 16:25:23.551225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.901 [2024-11-20 16:25:23.551255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.901 qpair failed and we were unable to recover it. 
00:30:47.901 [2024-11-20 16:25:23.551521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.551551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.551914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.551943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.552285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.552316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.552674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.552703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.553085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.553113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.553352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.553381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.553765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.553793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.554201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.554232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.554553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.554583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.554918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.554947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 
00:30:47.902 [2024-11-20 16:25:23.555228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.555258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.555633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.555662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.556032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.556062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.556326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.556355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.556600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.556628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.556984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.557014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.557347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.557377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.557738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.557768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.558110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.558139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.558528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.558559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 
00:30:47.902 [2024-11-20 16:25:23.558809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.558838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.559183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.559215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.559585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.559614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.559993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.560023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.560271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.560302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.560665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.560693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.561069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.561097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.561363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.561399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.561766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.561795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.562194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.562224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 
00:30:47.902 [2024-11-20 16:25:23.562585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.562615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.562854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.562884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.563227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.563258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.563639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.563668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.564039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.564067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.564407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.564436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.564798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.564827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.565202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.565233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.902 [2024-11-20 16:25:23.565591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.902 [2024-11-20 16:25:23.565619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.902 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.565992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.566021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 
00:30:47.903 [2024-11-20 16:25:23.566377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.566408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.566791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.566820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.567197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.567227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.567582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.567611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.567944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.567975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.568321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.568352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.568723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.568752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.569108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.569136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.569573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.569602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.569776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.569804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 
00:30:47.903 [2024-11-20 16:25:23.570216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.570246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.570620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.570649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.571007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.571036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.571409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.571439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.571799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.571828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.572186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.572216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.572603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.572631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.573005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.573039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.573379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.573410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.573791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.573820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 
00:30:47.903 [2024-11-20 16:25:23.574179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.574210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.574585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.574614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.574992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.575022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.575383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.575412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.575680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.575713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.576045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.576075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.576424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.576454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.576826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.576855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.577134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.577207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.577547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.577576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 
00:30:47.903 [2024-11-20 16:25:23.577805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.577839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.578197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.578229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.578526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.578555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.578907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.578936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.579295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.579326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.579707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.579735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.903 qpair failed and we were unable to recover it. 00:30:47.903 [2024-11-20 16:25:23.579991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.903 [2024-11-20 16:25:23.580019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.580386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.580417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.580798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.580827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.581179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.581209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 
00:30:47.904 [2024-11-20 16:25:23.581432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.581464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.581626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.581654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.582012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.582041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.582389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.582420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.582757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.582792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.583131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.583172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.583513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.583542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.583888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.583917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.584279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.584310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 00:30:47.904 [2024-11-20 16:25:23.584649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.904 [2024-11-20 16:25:23.584680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.904 qpair failed and we were unable to recover it. 
00:30:47.904 [2024-11-20 16:25:23.585058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.904 [2024-11-20 16:25:23.585086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:47.904 qpair failed and we were unable to recover it.
00:30:47.904 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 16:25:23.585 through 16:25:23.662, all against the same tqpair, address, and port ...]
00:30:47.910 [2024-11-20 16:25:23.662806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.662835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.663234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.663271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.663642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.663671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.664028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.664057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.664406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.664437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.664722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.664751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.665082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.665112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.665540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.665570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.665869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.665897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.666260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.666291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 
00:30:47.910 [2024-11-20 16:25:23.666665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.666693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.667106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.667134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.667512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.667541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.667917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.667946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.668299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.668328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.668675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.668706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.669069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.669098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.669355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.669385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.669765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.669794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.670128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.670168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 
00:30:47.910 [2024-11-20 16:25:23.670521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.670550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.670927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.670956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.671238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.671268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.671650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.671679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.671917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.671945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.672292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.672323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.672680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.672710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.673079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.673107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.673487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.673538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.673901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.673930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 
00:30:47.910 [2024-11-20 16:25:23.674299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.674330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.674674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.674712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.675086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.675114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.675499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.910 [2024-11-20 16:25:23.675528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.910 qpair failed and we were unable to recover it. 00:30:47.910 [2024-11-20 16:25:23.675907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.675937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.676177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.676214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.676620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.676650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.677009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.677037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.677383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.677413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.677755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.677783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 
00:30:47.911 [2024-11-20 16:25:23.678116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.678144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.678516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.678545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.678905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.678935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.679294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.679325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.679684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.679714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.680074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.680103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.680461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.680492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.680822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.680851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.681224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.681255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.681635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.681663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 
00:30:47.911 [2024-11-20 16:25:23.682016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.682045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.682395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.682424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.682783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.682812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.683172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.683202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.683539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.683567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.683995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.684023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.684388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.684419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.684764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.684793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.685148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.685189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.685566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.685595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 
00:30:47.911 [2024-11-20 16:25:23.685944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.685972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.686310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.686342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.686699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.686729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.687087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.687116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.687364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.687393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.687751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.687780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.688142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.688179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.688535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.688564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.688923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.688952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.689327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.689363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 
00:30:47.911 [2024-11-20 16:25:23.689709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.689738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.690080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.690108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.911 [2024-11-20 16:25:23.690564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.911 [2024-11-20 16:25:23.690595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.911 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.690936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.690966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.691320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.691350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.691717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.691746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.692120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.692150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.692515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.692545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.692910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.692939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.693320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.693350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 
00:30:47.912 [2024-11-20 16:25:23.693711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.693739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.694114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.694143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.694517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.694547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.694912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.694941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.695317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.695348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.695729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.695758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.696111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.696141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.696478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.696508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.696753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.696782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.697133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.697187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 
00:30:47.912 [2024-11-20 16:25:23.697579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.697607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.697962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.697991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.698371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.698401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.698762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.698790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.699192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.699223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.699577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.699605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.699975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.700010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.700307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.700336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.700571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.700603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.700956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.700985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 
00:30:47.912 [2024-11-20 16:25:23.701326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.701355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.701789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.701817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.702047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.702075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.702462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.702492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.702864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.702891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.703253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.703283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.703534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.703566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.703949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.703977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.704348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.704377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 00:30:47.912 [2024-11-20 16:25:23.704726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.704757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.912 qpair failed and we were unable to recover it. 
00:30:47.912 [2024-11-20 16:25:23.705107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.912 [2024-11-20 16:25:23.705136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.705504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.705533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.705897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.705927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.706298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.706328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.706695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.706723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.707081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.707109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.707544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.707575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.707926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.707955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.708218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.708249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.708626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.708655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 
00:30:47.913 [2024-11-20 16:25:23.709019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.709048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.709401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.709432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.709801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.709829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.710181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.710217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.710480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.710510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.710866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.710895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.711255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.711286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.711521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.711551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.711817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.711846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 00:30:47.913 [2024-11-20 16:25:23.712201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-11-20 16:25:23.712232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.913 qpair failed and we were unable to recover it. 
00:30:47.913 [2024-11-20 16:25:23.712610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.913 [2024-11-20 16:25:23.712638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:47.913 qpair failed and we were unable to recover it.
00:30:47.913 [... the same three-line failure repeats for every subsequent connect attempt from 16:25:23.712998 through 16:25:23.792657 — roughly 190 identical posix_sock_create / nvme_tcp_qpair_connect_sock error pairs, all against tqpair=0x152e0c0 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it."; duplicate entries elided ...]
00:30:47.919 [2024-11-20 16:25:23.793003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.793041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.793413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.793444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.793796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.793826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.794189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.794220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.794582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.794611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.794867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.794899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.795254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.795285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.795662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.795692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.796064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.796092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.796444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.796476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 
00:30:47.919 [2024-11-20 16:25:23.796829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.796860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.797221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.797254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.797623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.797653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.798014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.798045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.798407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.798438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.798801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.798830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.799170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.799202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.799535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.799564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.799904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.799933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.800280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.800311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 
00:30:47.919 [2024-11-20 16:25:23.800686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.800715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.801102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.801132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.801550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.801582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.801921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.801956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.802358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.802390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.802772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.802805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.803177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.803209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.803575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.803605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.803866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.803899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 00:30:47.919 [2024-11-20 16:25:23.804257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.919 [2024-11-20 16:25:23.804288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.919 qpair failed and we were unable to recover it. 
00:30:47.919 [2024-11-20 16:25:23.804667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.804696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.805132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.805173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.805530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.805563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.805947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.805976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.806340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.806371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.806726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.806757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.807111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.807140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.807517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.807546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.807895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.807925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.808277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.808307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 
00:30:47.920 [2024-11-20 16:25:23.808670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.808700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.809057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.809086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.809454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.809485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.809869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.809899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.810235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.810265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.810628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.810658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.811012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.811044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.811424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.811454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.811797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.811825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.812183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.812213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 
00:30:47.920 [2024-11-20 16:25:23.812571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.812600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.812967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.812995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.813371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.813401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.813764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.813792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.814143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.814185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.814435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.814464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.814825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.814855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.815224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.815256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.815623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.815652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.816024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.816053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 
00:30:47.920 [2024-11-20 16:25:23.816420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.816449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.816819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.816848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.817085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.817113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.817507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.817536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.817902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.817930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.818189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.818221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.818597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.818626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.818997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.819027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.920 [2024-11-20 16:25:23.819389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.920 [2024-11-20 16:25:23.819426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.920 qpair failed and we were unable to recover it. 00:30:47.921 [2024-11-20 16:25:23.819768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.921 [2024-11-20 16:25:23.819798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.921 qpair failed and we were unable to recover it. 
00:30:47.921 [2024-11-20 16:25:23.820157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.921 [2024-11-20 16:25:23.820200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.921 qpair failed and we were unable to recover it. 00:30:47.921 [2024-11-20 16:25:23.820563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.921 [2024-11-20 16:25:23.820591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.921 qpair failed and we were unable to recover it. 00:30:47.921 [2024-11-20 16:25:23.820951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.921 [2024-11-20 16:25:23.820979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.921 qpair failed and we were unable to recover it. 00:30:47.921 [2024-11-20 16:25:23.821313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.921 [2024-11-20 16:25:23.821344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.921 qpair failed and we were unable to recover it. 00:30:47.921 [2024-11-20 16:25:23.821700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.921 [2024-11-20 16:25:23.821728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:47.921 qpair failed and we were unable to recover it. 00:30:48.194 [2024-11-20 16:25:23.822098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.194 [2024-11-20 16:25:23.822130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.194 qpair failed and we were unable to recover it. 00:30:48.194 [2024-11-20 16:25:23.822506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.194 [2024-11-20 16:25:23.822537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.194 qpair failed and we were unable to recover it. 00:30:48.194 [2024-11-20 16:25:23.822897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.194 [2024-11-20 16:25:23.822926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.194 qpair failed and we were unable to recover it. 00:30:48.194 [2024-11-20 16:25:23.823287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.194 [2024-11-20 16:25:23.823318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.194 qpair failed and we were unable to recover it. 00:30:48.194 [2024-11-20 16:25:23.823730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.194 [2024-11-20 16:25:23.823759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.194 qpair failed and we were unable to recover it. 
00:30:48.194 [2024-11-20 16:25:23.824119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.194 [2024-11-20 16:25:23.824148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.194 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.824532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.824563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.824931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.824960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.825337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.825368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.825726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.825755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.826118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.826148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.826538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.826568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.826927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.826956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.827233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.827265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.827553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.827583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 
00:30:48.195 [2024-11-20 16:25:23.827926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.827954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.828310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.828340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.828733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.828763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.829131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.829177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.829584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.829614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.829966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.830009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.830371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.830402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.830757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.830786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.831193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.831224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.831573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.831603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 
00:30:48.195 [2024-11-20 16:25:23.831962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.831992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.832349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.832380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.832738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.832768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.833132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.833169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.833478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.833509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.833883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.833915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.834281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.834312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.834561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.834594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.835001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.835031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.835378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.835411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 
00:30:48.195 [2024-11-20 16:25:23.835799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.835832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.836199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.836231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.836584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.836614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.836985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.837016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.837384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.837415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.837773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.837802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.838172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.838203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.838557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.195 [2024-11-20 16:25:23.838587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.195 qpair failed and we were unable to recover it. 00:30:48.195 [2024-11-20 16:25:23.838933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.838963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.839211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.839243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 
00:30:48.196 [2024-11-20 16:25:23.839624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.839655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.840016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.840045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.840416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.840453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.840856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.840886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.841300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.841331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.841712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.841742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.842148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.842193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.842449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.842478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.842849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.842880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.843232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.843264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 
00:30:48.196 [2024-11-20 16:25:23.843630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.843668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.844019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.844049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.844384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.844416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.844783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.844813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.845178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.845209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.845465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.845498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.845904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.845935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.846295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.846327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.846731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.846760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.847109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.847143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 
00:30:48.196 [2024-11-20 16:25:23.847553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.847585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.847937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.847968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.848328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.848361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.848746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.848776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.849126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.849156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.849537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.849568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.849922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.849952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.850314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.850346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.850596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.850625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.850985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.851015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 
00:30:48.196 [2024-11-20 16:25:23.851355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.851387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.851548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.851579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.851945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.851976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.852341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.852373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.852729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.852758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.853128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.196 [2024-11-20 16:25:23.853171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.196 qpair failed and we were unable to recover it. 00:30:48.196 [2024-11-20 16:25:23.853534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.853564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.853934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.853963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.854327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.854357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.854715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.854745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 
00:30:48.197 [2024-11-20 16:25:23.854981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.855014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.855394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.855425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.855755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.855786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.856149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.856201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.856567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.856597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.856953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.856982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.857339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.857371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.857623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.857656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.858034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.858063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.858414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.858445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 
00:30:48.197 [2024-11-20 16:25:23.858800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.858830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.859186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.859217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.859580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.859609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.859976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.860005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.860354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.860384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.860621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.860650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.860979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.861008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.861436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.861468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.861828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.861859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.862216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.862248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 
00:30:48.197 [2024-11-20 16:25:23.862636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.862666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.863034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.863064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.863327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.863361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.863742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.863882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.864282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.864313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.864690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.864719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.865084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.865114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.865463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.865493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.865824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.865853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.866238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.866269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 
00:30:48.197 [2024-11-20 16:25:23.866606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.866642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.867010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.867039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.867376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.867407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.867796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.867825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.197 qpair failed and we were unable to recover it. 00:30:48.197 [2024-11-20 16:25:23.868199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.197 [2024-11-20 16:25:23.868229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.868502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.868531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.868906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.868936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.869283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.869313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.869689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.869718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.870066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.870096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 
00:30:48.198 [2024-11-20 16:25:23.870317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.870349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.870580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.870609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.870983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.871018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.871354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.871385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.871749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.871779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.872079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.872109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.872368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.872400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.872757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.872787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.873145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.873186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.873615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.873645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 
00:30:48.198 [2024-11-20 16:25:23.874010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.874039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.874432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.874462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.874827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.874856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.875255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.875288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.875693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.875724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.876078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.876107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.876452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.876483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.876836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.876875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.877260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.877292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.877654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.877684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 
00:30:48.198 [2024-11-20 16:25:23.878065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.878095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.878471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.878502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.878872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.878907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.879231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.879261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.879535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.879564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.879944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.879974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.880226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.880258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.880492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.880527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.880892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.880921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.881290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.881322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 
00:30:48.198 [2024-11-20 16:25:23.881556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.881587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.881945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.881975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.882388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.198 [2024-11-20 16:25:23.882419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.198 qpair failed and we were unable to recover it. 00:30:48.198 [2024-11-20 16:25:23.882786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.882815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.883186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.883217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.883593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.883624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.883987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.884017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.884393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.884423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.884783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.884812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.885038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.885067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 
00:30:48.199 [2024-11-20 16:25:23.885453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.885484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.885844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.885875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.886233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.886265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.886656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.886685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.886908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.886938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.887301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.887332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.887693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.887723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.887979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.888009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.888255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.888290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.888656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.888687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 
00:30:48.199 [2024-11-20 16:25:23.889054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.889084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.889442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.889473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.889826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.889857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.890235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.890268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.890500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.890529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.890853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.890883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.891239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.891270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.891662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.891694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.892051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.892083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.892446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.892477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 
00:30:48.199 [2024-11-20 16:25:23.892830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.892859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.893223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.893254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.893600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.893630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.893992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.894022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.894329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.199 [2024-11-20 16:25:23.894361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.199 qpair failed and we were unable to recover it. 00:30:48.199 [2024-11-20 16:25:23.894722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.894753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.895110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.895141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.895538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.895569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.895901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.895931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.896290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.896325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 
00:30:48.200 [2024-11-20 16:25:23.896689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.896719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.897084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.897117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.897532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.897563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.897834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.897862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.898233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.898265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.898691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.898720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.899085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.899115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.899494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.899525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.899887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.899915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.900120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.900148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 
00:30:48.200 [2024-11-20 16:25:23.900311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.900341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.900727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.900756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.901133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.901190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.901463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.901492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.901862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.901891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.902248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.902285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.902675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.902704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.902954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.902983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.903273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.903302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.903654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.903684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 
00:30:48.200 [2024-11-20 16:25:23.904045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.904074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.904349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.904379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.904717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.904747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.904994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.905023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.905311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.905342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.905735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.905765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.906127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.906170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.906557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.906586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.906987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.907018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.907389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.907420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 
00:30:48.200 [2024-11-20 16:25:23.907790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.907819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.908191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.908222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.908633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.908662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.200 qpair failed and we were unable to recover it. 00:30:48.200 [2024-11-20 16:25:23.909039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.200 [2024-11-20 16:25:23.909067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.201 qpair failed and we were unable to recover it. 00:30:48.201 [2024-11-20 16:25:23.909412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.201 [2024-11-20 16:25:23.909442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.201 qpair failed and we were unable to recover it. 00:30:48.201 [2024-11-20 16:25:23.909804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.201 [2024-11-20 16:25:23.909833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.201 qpair failed and we were unable to recover it. 00:30:48.201 [2024-11-20 16:25:23.910207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.201 [2024-11-20 16:25:23.910237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.201 qpair failed and we were unable to recover it. 00:30:48.201 [2024-11-20 16:25:23.910624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.201 [2024-11-20 16:25:23.910654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.201 qpair failed and we were unable to recover it. 00:30:48.201 [2024-11-20 16:25:23.911005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.201 [2024-11-20 16:25:23.911035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.201 qpair failed and we were unable to recover it. 00:30:48.201 [2024-11-20 16:25:23.911381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.201 [2024-11-20 16:25:23.911412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.201 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" (ECONNREFUSED) / "sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." record sequence repeated for every reconnection attempt between 16:25:23.908 and 16:25:23.988 ...]
00:30:48.206 [2024-11-20 16:25:23.984968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.984996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.985350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.985379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.985753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.985794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.986204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.986236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.986567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.986597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.986959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.986988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.987330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.987361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.987720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.987750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.988109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.988137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.988437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.988467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 
00:30:48.206 [2024-11-20 16:25:23.988814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.988842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.989207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.989237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.989634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.989662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.990033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.990063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.990482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.990513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.990848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.990877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.991231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.991262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.991689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.991719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.992084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.992115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.992486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.992518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 
00:30:48.206 [2024-11-20 16:25:23.992933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.992962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.993330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.993360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.993714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.993743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.994112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.994141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.994495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.994526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.994782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.994811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.995058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.995086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.995443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.995475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.995740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.206 [2024-11-20 16:25:23.995769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.206 qpair failed and we were unable to recover it. 00:30:48.206 [2024-11-20 16:25:23.996115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.996150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 
00:30:48.207 [2024-11-20 16:25:23.996523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.996552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.996917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.996946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.997278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.997308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.997677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.997707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.998059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.998088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.998555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.998584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.998943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.998972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.999333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.999364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:23.999724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:23.999753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.000116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.000145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 
00:30:48.207 [2024-11-20 16:25:24.000540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.000571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.000950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.000979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.001214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.001247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.001620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.001650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.002012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.002041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.002406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.002438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.002802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.002832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.003192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.003223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.003568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.003598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.003959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.003988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 
00:30:48.207 [2024-11-20 16:25:24.004355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.004385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.004812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.004841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.005091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.005120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.005459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.005490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.005827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.005857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.006227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.006258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.006665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.006694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.007070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.007100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.007461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.007492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.007831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.007861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 
00:30:48.207 [2024-11-20 16:25:24.008229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.008260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.008644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.008672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.009039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.009068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.009454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.009484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.009851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.009879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.010231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.010261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.010627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.207 [2024-11-20 16:25:24.010656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.207 qpair failed and we were unable to recover it. 00:30:48.207 [2024-11-20 16:25:24.011007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.011036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.011409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.011440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.011805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.011834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 
00:30:48.208 [2024-11-20 16:25:24.012187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.012219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.012592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.012622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.012980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.013010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.013352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.013383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.013739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.013768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.014116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.014147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.014516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.014545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.014904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.014934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.015292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.015322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.015673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.015701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 
00:30:48.208 [2024-11-20 16:25:24.016062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.016091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.016450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.016480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.016861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.016890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.017241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.017271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.017616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.017645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.018015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.018044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.018337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.018367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.018719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.018747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.019105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.019134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.019505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.019535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 
00:30:48.208 [2024-11-20 16:25:24.019907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.019936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.020294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.020323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.020679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.020708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.021076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.021105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.021465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.021495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.021859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.021890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.022230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.022260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.022625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.022661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.023017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.023047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 00:30:48.208 [2024-11-20 16:25:24.023485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.023515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.208 qpair failed and we were unable to recover it. 
00:30:48.208 [2024-11-20 16:25:24.023860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.208 [2024-11-20 16:25:24.023890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.024253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.024283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.024655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.024684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.025040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.025070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.025459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.025488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.025769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.025797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.026182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.026214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.026516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.026547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.026912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.026941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.027304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.027334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 
00:30:48.209 [2024-11-20 16:25:24.027698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.027727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.028104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.028134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.028492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.028524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.028875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.028905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.029304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.029334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.029691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.029720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.030097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.030125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.030480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.030510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.030851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.030880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.031257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.031286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 
00:30:48.209 [2024-11-20 16:25:24.031631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.031660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.032022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.032051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.032416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.032447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.032813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.032842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.033191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.033230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.033564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.033594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.033943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.033971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.034333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.034363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.034707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.034737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.035099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.035128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 
00:30:48.209 [2024-11-20 16:25:24.035546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.035576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.035940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.035968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.036330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.036361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.036599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.036627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.036997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.037027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.037399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.037430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.037802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.037830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.038191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.038221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.209 [2024-11-20 16:25:24.038568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.209 [2024-11-20 16:25:24.038598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.209 qpair failed and we were unable to recover it. 00:30:48.210 [2024-11-20 16:25:24.038832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.210 [2024-11-20 16:25:24.038860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.210 qpair failed and we were unable to recover it. 
00:30:48.210 [2024-11-20 16:25:24.039226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.210 [2024-11-20 16:25:24.039256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.210 qpair failed and we were unable to recover it.
[... the same three-message sequence ("connect() failed, errno = 111" / "sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats once per reconnect attempt for every timestamp from 16:25:24.039 through 16:25:24.118; the near-verbatim duplicates are elided here ...]
00:30:48.487 [2024-11-20 16:25:24.118538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.118568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it.
00:30:48.488 [2024-11-20 16:25:24.118937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.118967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.119337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.119369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.119745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.119773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.120141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.120182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.120550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.120581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.120937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.120966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.121328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.121360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.121712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.121742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.122116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.122144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.122485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.122514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 
00:30:48.488 [2024-11-20 16:25:24.122865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.122896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.123120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.123150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.123416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.123446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.123812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.123841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.124212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.124242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.124603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.124632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.124998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.125028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.125406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.125436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.125807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.125843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.126199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.126230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 
00:30:48.488 [2024-11-20 16:25:24.126494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.126523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.126928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.126958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.127318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.127348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.127707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.127738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.128093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.128122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.128500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.128530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.128898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.128927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.129295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.129325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.129745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.129775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.130133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.130174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 
00:30:48.488 [2024-11-20 16:25:24.130527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.130557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.130914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.130943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.131316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.131347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.131723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.131752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.132007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.132037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.132334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.132365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.132710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.488 [2024-11-20 16:25:24.132739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.488 qpair failed and we were unable to recover it. 00:30:48.488 [2024-11-20 16:25:24.133092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.133121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.133382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.133416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.133807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.133837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 
00:30:48.489 [2024-11-20 16:25:24.134218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.134249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.134664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.134695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.135044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.135074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.135450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.135481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.135851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.135881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.136136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.136178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.136487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.136518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.136864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.136893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.137260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.137290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.137555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.137584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 
00:30:48.489 [2024-11-20 16:25:24.137926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.137955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.138337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.138368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.138779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.138810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.139172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.139204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.139562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.139592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.139954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.139983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.140341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.140373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.140737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.140767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.141135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.141176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.141554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.141584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 
00:30:48.489 [2024-11-20 16:25:24.141943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.141972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.142342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.142373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.142733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.142762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.142989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.143021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.143348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.143380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.143739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.143768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.144132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.144172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.144527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.144557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.144919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.144949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.145205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.145236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 
00:30:48.489 [2024-11-20 16:25:24.145598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.145628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.146058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.146087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.146446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.146476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.146853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.146883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.147224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.147255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.489 qpair failed and we were unable to recover it. 00:30:48.489 [2024-11-20 16:25:24.147629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.489 [2024-11-20 16:25:24.147658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.148017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.148046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.148426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.148458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.148809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.148837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.149181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.149211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 
00:30:48.490 [2024-11-20 16:25:24.149573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.149602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.149905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.149935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.150278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.150308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.150715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.150743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.151072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.151101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.151445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.151474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.151839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.151874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.152283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.152314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.152686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.152715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.153088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.153117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 
00:30:48.490 [2024-11-20 16:25:24.153510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.153540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.153897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.153926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.154288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.154318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.154708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.154737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.155099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.155128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.155472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.155501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.155864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.155893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.156241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.156271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.156647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.156677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.157046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.157076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 
00:30:48.490 [2024-11-20 16:25:24.157466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.157497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.157835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.157863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.158230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.158260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.158664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.158692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.159053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.159082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.159455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.159485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.159853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.159882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.160240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.160270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.160636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.160665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.161018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.161047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 
00:30:48.490 [2024-11-20 16:25:24.161393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.161423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.161786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.161814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.162064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.490 [2024-11-20 16:25:24.162096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.490 qpair failed and we were unable to recover it. 00:30:48.490 [2024-11-20 16:25:24.162446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.162483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.162912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.162942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.163302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.163333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.163711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.163740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.164106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.164134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.164501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.164531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.164880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.164909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 
00:30:48.491 [2024-11-20 16:25:24.165282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.165312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.165689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.165718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.166079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.166109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.166478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.166509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.166874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.166903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.167239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.167270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.167694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.167723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.168050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.168079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.168415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.168445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 00:30:48.491 [2024-11-20 16:25:24.168802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.491 [2024-11-20 16:25:24.168831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.491 qpair failed and we were unable to recover it. 
00:30:48.491 [2024-11-20 16:25:24.169072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.491 [2024-11-20 16:25:24.169100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.491 qpair failed and we were unable to recover it.
00:30:48.497 [2024-11-20 16:25:24.249951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.497 [2024-11-20 16:25:24.249981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.497 qpair failed and we were unable to recover it.
00:30:48.497 [2024-11-20 16:25:24.250345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.250375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.250744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.250772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.251154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.251198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.251545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.251573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.251940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.251970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.252322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.252353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.252713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.252742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.253101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.253131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.253507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.253537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.253910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.253940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 
00:30:48.497 [2024-11-20 16:25:24.254305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.254336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.254708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.254737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.255100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.255130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.255509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.255538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.255907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.255937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.256299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.256334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.256568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.256599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.256948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.256977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.257319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.257349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.257706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.257735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 
00:30:48.497 [2024-11-20 16:25:24.258103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.258132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.258554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.258584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.258944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.258973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.259348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.259380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.259746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.259776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.260022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.260054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.260392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.497 [2024-11-20 16:25:24.260423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.497 qpair failed and we were unable to recover it. 00:30:48.497 [2024-11-20 16:25:24.260783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.260813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.261178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.261209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.261571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.261600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 
00:30:48.498 [2024-11-20 16:25:24.261963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.261992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.262372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.262403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.262638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.262670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.263044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.263074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.263425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.263457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.263813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.263842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.264214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.264245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.264604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.264633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.264969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.264999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.265343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.265374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 
00:30:48.498 [2024-11-20 16:25:24.265636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.265664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.266005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.266035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.266376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.266407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.266766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.266795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.267168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.267198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.267550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.267581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.267951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.267980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.268342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.268372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.268745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.268774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.269137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.269176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 
00:30:48.498 [2024-11-20 16:25:24.269522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.269550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.269915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.269945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.270280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.270310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.270689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.270718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.271079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.271108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.271460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.271491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.271867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.271897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.272245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.272277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.272645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.272680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.273016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.273046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 
00:30:48.498 [2024-11-20 16:25:24.273391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.273421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.273787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.273815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.274178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.274208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.274611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.274640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.498 [2024-11-20 16:25:24.275000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.498 [2024-11-20 16:25:24.275029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.498 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.275472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.275503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.275857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.275886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.276247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.276277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.276635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.276666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.277028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.277056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 
00:30:48.499 [2024-11-20 16:25:24.277399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.277430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.277797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.277826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.278190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.278221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.278474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.278502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.278876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.278905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.279191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.279223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.279578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.279607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.279982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.280010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.280260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.280293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.280642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.280671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 
00:30:48.499 [2024-11-20 16:25:24.281035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.281065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.281432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.281462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.281810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.281840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.282197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.282228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.282585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.282614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.282983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.283021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.283347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.283377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.283720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.283750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.284190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.284221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.284580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.284608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 
00:30:48.499 [2024-11-20 16:25:24.284970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.284998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.285373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.285403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.285765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.285796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.286134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.286174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.286530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.286559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.286918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.286947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.287319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.287349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.287727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.287756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.288189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.288221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.288584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.288613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 
00:30:48.499 [2024-11-20 16:25:24.289046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.289075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.289403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.499 [2024-11-20 16:25:24.289435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.499 qpair failed and we were unable to recover it. 00:30:48.499 [2024-11-20 16:25:24.289791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.289819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.290156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.290198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.290558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.290588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.290946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.290975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.291347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.291376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.291751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.291782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.292129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.292173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.292501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.292530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 
00:30:48.500 [2024-11-20 16:25:24.292914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.292944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.293303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.293335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.293677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.293716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.294073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.294103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.294484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.294516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.294855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.294884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.295242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.295273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.295646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.295675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.296035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.296064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.296411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.296440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 
00:30:48.500 [2024-11-20 16:25:24.296797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.296826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.297205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.297237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.297599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.297629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.297972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.298003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.298333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.298364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.298690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.298719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.299080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.299110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.299525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.299555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.299900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.299929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 00:30:48.500 [2024-11-20 16:25:24.300299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.500 [2024-11-20 16:25:24.300330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.500 qpair failed and we were unable to recover it. 
00:30:48.500 [2024-11-20 16:25:24.300672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.500 [2024-11-20 16:25:24.300703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.500 qpair failed and we were unable to recover it.
00:30:48.501 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0x152e0c0 / qpair failed record repeats through 16:25:24.311623 ...]
00:30:48.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1472965 Killed "${NVMF_APP[@]}" "$@"
00:30:48.501 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:48.501 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:48.501 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:48.501 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:48.501 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:48.501 [... interleaved connect() failed (errno = 111) retry records continue from 16:25:24.311899 through 16:25:24.314313 ...]
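Editor's note: errno = 111 in the repeated records above is ECONNREFUSED on Linux. Once target_disconnect.sh kills the target app, nothing is listening on 10.0.0.2:4420, so every qpair reconnect attempt from the host is refused until a new target starts. A minimal bash sketch of that failure mode (the loopback address below is an illustrative stand-in, not this run's netns configuration):

# Minimal sketch: connecting to a TCP port with no listener fails with
# ECONNREFUSED (errno 111 on Linux), the same error posix_sock_create
# reports above. Address/port are placeholders, not this run's values.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420' 2>/dev/null; then
    echo "connect() refused: no listener on 127.0.0.1 port 4420"
fi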
00:30:48.502 [... the connect() failed (errno = 111) / sock connection error of tqpair=0x152e0c0 / qpair failed record repeats from 16:25:24.314666 through 16:25:24.321984 ...]
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1473885
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1473885
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1473885 ']'
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:48.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:48.502 [... interleaved connect() failed (errno = 111) retry records continue from 16:25:24.322336 through 16:25:24.323870 ...]
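Editor's note: the trace above shows nvmfappstart relaunching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocking in waitforlisten until the new process is up, with rpc_addr=/var/tmp/spdk.sock and max_retries=100. A simplified sketch of that polling pattern, as an illustration only (not SPDK's actual waitforlisten implementation, which also verifies the RPC server responds):

# Simplified wait-for-listen poll loop: retry until the target's UNIX-domain
# RPC socket appears, bailing out if the process dies or retries run out.
pid=1473885                  # nvmf_tgt pid reported in the trace above
rpc_sock=/var/tmp/spdk.sock  # RPC socket path from the trace above
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "target process exited"; break; }
    [ -S "$rpc_sock" ] && { echo "target is listening on $rpc_sock"; break; }
    sleep 0.1
done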
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:48.502 16:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:48.502 [... interleaved connect() failed (errno = 111) retry records continue from 16:25:24.324116 through 16:25:24.327296 ...]
00:30:48.502 [2024-11-20 16:25:24.327634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.502 [2024-11-20 16:25:24.327674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.502 qpair failed and we were unable to recover it. 00:30:48.502 [2024-11-20 16:25:24.328102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.502 [2024-11-20 16:25:24.328136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.328421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.328454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.328827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.328862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.329240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.329271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.329528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.329558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.329792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.329826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.330183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.330215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.330597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.330628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.330999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.331032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 
00:30:48.503 [2024-11-20 16:25:24.331255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.331286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.331663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.331693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.332045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.332083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.332508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.332541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.332747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.332777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.333062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.333093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.333464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.333496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.333856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.333888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.334132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.334184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.334560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.334592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 
00:30:48.503 [2024-11-20 16:25:24.334954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.334984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.335335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.335367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.335745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.335776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.336018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.336048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.336408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.336439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.336816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.336847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.337226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.337258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.337648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.337680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.338035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.338067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.338241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.338271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 
00:30:48.503 [2024-11-20 16:25:24.338525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.338559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.338921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.338951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.339244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.339274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.339652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.339682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.339912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.339945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.340275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.340306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.340691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.340721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.341086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.341115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.341485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.341517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 00:30:48.503 [2024-11-20 16:25:24.341766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.503 [2024-11-20 16:25:24.341795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.503 qpair failed and we were unable to recover it. 
00:30:48.504 [2024-11-20 16:25:24.342048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.342078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.342468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.342500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.342757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.342786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.343150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.343193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.343575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.343606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.343838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.343868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.344231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.344263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.344636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.344667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.344897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.344926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.345180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.345211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 
00:30:48.504 [2024-11-20 16:25:24.345585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.345615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.345957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.345989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.346197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.346228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.346668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.346699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.347080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.347111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.347478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.347510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.347881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.347910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.348245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.348278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.348672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.348702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.349080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.349111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 
00:30:48.504 [2024-11-20 16:25:24.349520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.349552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.349974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.350004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.350257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.350289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.350654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.350683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.351107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.351136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.351554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.351585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.351824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.351853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.352244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.352276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.352577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.352607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.352849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.352881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 
00:30:48.504 [2024-11-20 16:25:24.353245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.353276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.353663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.353695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.354071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.354101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.354531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.354563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.354822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.354851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.355219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.355250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.355618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.355647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.504 [2024-11-20 16:25:24.356041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.504 [2024-11-20 16:25:24.356071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.504 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.356409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.356441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.356812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.356841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 
00:30:48.505 [2024-11-20 16:25:24.357206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.357243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.357624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.357654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.358038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.358068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.358460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.358491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.358889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.358919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.359285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.359316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.359691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.359721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.360103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.360132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.360503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.360533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.360676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.360706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 
00:30:48.505 [2024-11-20 16:25:24.361099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.361129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.361524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.361556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.361786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.361815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.362186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.362219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.362465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.362494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.362760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.362790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.363175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.363208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.363470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.363499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.363879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.363908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 00:30:48.505 [2024-11-20 16:25:24.364295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.505 [2024-11-20 16:25:24.364327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.505 qpair failed and we were unable to recover it. 
00:30:48.506 [2024-11-20 16:25:24.380068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.380098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.380496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.380527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.380916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.380947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.381247] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization...
00:30:48.506 [2024-11-20 16:25:24.381306] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:48.506 [2024-11-20 16:25:24.381372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.381401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.381774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.381802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.382235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.382265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.382527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.382557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.382938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.506 [2024-11-20 16:25:24.382969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.506 qpair failed and we were unable to recover it.
00:30:48.506 [2024-11-20 16:25:24.383343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.383376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
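The two initialization records embedded above show the nvmf target process starting and handing its parameters to DPDK's Environment Abstraction Layer; the connection refusals keep appearing because the host side retries until that target finishes initializing and opens its TCP listener. As a hedged illustration of what such an EAL hand-off looks like (the argv below mirrors the logged parameters with the --log-level flags dropped for brevity; this is not SPDK's actual initialization code):

    #include <rte_eal.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative argv mirroring the logged EAL parameters;
         * the real nvmf target assembles this list internally. */
        char *eal_argv[] = {
            "nvmf", "-c", "0xF0", "--no-telemetry",
            "--base-virtaddr=0x200000000000", "--match-allocations",
            "--file-prefix=spdk0", "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        /* rte_eal_init() parses the EAL arguments, sets up hugepage memory,
         * and launches worker threads on the cores in the mask
         * (-c 0xF0 = cores 4-7). It returns the number of arguments
         * consumed, or a negative value on failure. */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            printf("EAL initialization failed\n");
            return 1;
        }
        rte_eal_cleanup();
        return 0;
    }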
00:30:48.507 [2024-11-20 16:25:24.383746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.383776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.384026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.384055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.384304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.384335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.384455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.384487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.384823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.384918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.385397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.385437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.385879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.385911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.386265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.386299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.386568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.386599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
00:30:48.507 [2024-11-20 16:25:24.386971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.507 [2024-11-20 16:25:24.387000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.507 qpair failed and we were unable to recover it.
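Note that the failing qpair changes identity partway through the block above: the first four attempts still reference tqpair=0x152e0c0, while from 16:25:24.384823 onward the errors name tqpair=0x7f1848000b90, suggesting the host tore down the old qpair object and allocated a new one while it keeps retrying the same 10.0.0.2:4420 endpoint. For context, a minimal, hypothetical host-side sketch of such a connection attempt against SPDK's public API; only the transport type, address, and port come from the log, and the app name and discovery subnqn are assumptions:

    #include <spdk/env.h>
    #include <spdk/nvme.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "connect_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0)
            return 1;

        /* Same endpoint the log keeps retrying: NVMe/TCP on 10.0.0.2:4420. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420") != 0)
            return 1;
        snprintf(trid.subnqn, sizeof(trid.subnqn),
                 "nqn.2014-08.org.nvmexpress.discovery");

        /* spdk_nvme_connect() returns NULL when the admin qpair cannot be
         * established, which is the condition behind the repeated
         * "qpair failed and we were unable to recover it" lines above. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            printf("connect to 10.0.0.2:4420 failed\n");
            return 1;
        }
        spdk_nvme_detach(ctrlr);
        return 0;
    }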
00:30:48.786 [2024-11-20 16:25:24.437244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.437275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.437670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.437699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.438075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.438105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.438475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.438504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.438884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.438914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.439177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.439211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.439596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.439625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.440032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.440061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.440408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.440437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.440684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.440713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 
00:30:48.786 [2024-11-20 16:25:24.440969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.441001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.441387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.786 [2024-11-20 16:25:24.441417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.786 qpair failed and we were unable to recover it. 00:30:48.786 [2024-11-20 16:25:24.441683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.441713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.442091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.442121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.442415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.442446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.442676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.442705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.443080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.443108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.443507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.443537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.443917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.443947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.444338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.444369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 
00:30:48.787 [2024-11-20 16:25:24.444617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.444648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.444993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.445022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.445381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.445411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.445641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.445669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.446069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.446099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.446470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.446502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.446858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.446888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.447303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.447332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.447698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.447726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.448097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.448126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 
00:30:48.787 [2024-11-20 16:25:24.448408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.448439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.448828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.448864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.449108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.449136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.449528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.449558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.449932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.449961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.450340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.450371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.450620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.450649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.450916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.450948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.451323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.451354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.451714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.451743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 
00:30:48.787 [2024-11-20 16:25:24.452094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.452123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.787 qpair failed and we were unable to recover it. 00:30:48.787 [2024-11-20 16:25:24.452397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.787 [2024-11-20 16:25:24.452427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.452777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.452805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.453189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.453220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.453600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.453630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.454027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.454055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.454476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.454506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.454854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.454884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.455273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.455303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.455650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.455681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 
00:30:48.788 [2024-11-20 16:25:24.456099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.456128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.456494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.456524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.456981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.457011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.457382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.457413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.457795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.457822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.458070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.458099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.458445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.458477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.458840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.458869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.459241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.459273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.459666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.459696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 
00:30:48.788 [2024-11-20 16:25:24.459958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.459986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.460340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.460377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.460734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.460764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.461027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.461058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.461430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.461461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.461843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.461874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.462242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.462274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.462676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.462705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.463091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.463120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.463549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.463579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 
00:30:48.788 [2024-11-20 16:25:24.463950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.463980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.464371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.788 [2024-11-20 16:25:24.464409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.788 qpair failed and we were unable to recover it. 00:30:48.788 [2024-11-20 16:25:24.464641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.464673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.464934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.464964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.465326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.465355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.465756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.465784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.466179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.466210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.466470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.466501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.466846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.466875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.467256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.467287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 
00:30:48.789 [2024-11-20 16:25:24.467537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.467565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.467803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.467834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.468207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.468236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.468491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.468519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.468941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.468971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.469336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.469368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.469743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.469774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.470117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.470147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.470526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.470555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.470909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.470939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 
00:30:48.789 [2024-11-20 16:25:24.471302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.471333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.471713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.471742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.472103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.472131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.472391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.472421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.472818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.472848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.473219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.473272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.473638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.473668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.474104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.474132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.474388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.474418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 00:30:48.789 [2024-11-20 16:25:24.474811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.474840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.789 qpair failed and we were unable to recover it. 
00:30:48.789 [2024-11-20 16:25:24.475217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.789 [2024-11-20 16:25:24.475247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.475640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.475669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.476057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.476086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.476349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.476378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.476722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.476751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.477017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.477045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.477404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.477433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.477825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.477855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.478224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.478255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.478512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.478543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 
00:30:48.790 [2024-11-20 16:25:24.478920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.478950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.479313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.479349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.479570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.479598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.479971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.480000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.480359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.480390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.480770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.480799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.481178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.481208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.481618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.481647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.482025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.482054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.482490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.482520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 
00:30:48.790 [2024-11-20 16:25:24.482887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.482916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.483272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.483301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.483633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.483663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.790 [2024-11-20 16:25:24.483888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.790 [2024-11-20 16:25:24.483917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.790 qpair failed and we were unable to recover it. 00:30:48.791 [2024-11-20 16:25:24.484285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.791 [2024-11-20 16:25:24.484315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.791 qpair failed and we were unable to recover it. 00:30:48.791 [2024-11-20 16:25:24.484666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.791 [2024-11-20 16:25:24.484695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.791 qpair failed and we were unable to recover it. 00:30:48.791 [2024-11-20 16:25:24.485057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.791 [2024-11-20 16:25:24.485085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.791 qpair failed and we were unable to recover it. 00:30:48.791 [2024-11-20 16:25:24.485450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.791 [2024-11-20 16:25:24.485480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.791 qpair failed and we were unable to recover it. 00:30:48.791 [2024-11-20 16:25:24.485857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.791 [2024-11-20 16:25:24.485886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.791 qpair failed and we were unable to recover it. 00:30:48.791 [2024-11-20 16:25:24.486146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.791 [2024-11-20 16:25:24.486190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.791 qpair failed and we were unable to recover it. 
00:30:48.791 [2024-11-20 16:25:24.486578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.791 [2024-11-20 16:25:24.486608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.791 qpair failed and we were unable to recover it.
00:30:48.791 [2024-11-20 16:25:24.486728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:48.791 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence resumes at 16:25:24.487054 and repeats through 16:25:24.504327, still against tqpair=0x7f1848000b90, addr=10.0.0.2, port=4420 ...]
00:30:48.792 [2024-11-20 16:25:24.504711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.504744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.792 qpair failed and we were unable to recover it. 00:30:48.792 [2024-11-20 16:25:24.505116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.505145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.792 qpair failed and we were unable to recover it. 00:30:48.792 [2024-11-20 16:25:24.505537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.505568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.792 qpair failed and we were unable to recover it. 00:30:48.792 [2024-11-20 16:25:24.505906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.505937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.792 qpair failed and we were unable to recover it. 00:30:48.792 [2024-11-20 16:25:24.506289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.506320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.792 qpair failed and we were unable to recover it. 00:30:48.792 [2024-11-20 16:25:24.506692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.506721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.792 qpair failed and we were unable to recover it. 00:30:48.792 [2024-11-20 16:25:24.507111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.507141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.792 qpair failed and we were unable to recover it. 00:30:48.792 [2024-11-20 16:25:24.507574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.792 [2024-11-20 16:25:24.507603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.507977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.508006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.508361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.508391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 
00:30:48.793 [2024-11-20 16:25:24.508838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.508867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.509221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.509253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.509590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.509626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.509961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.509991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.510348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.510380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.510753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.510783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.511146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.511183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.511524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.511554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.511917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.511947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.512304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.512334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 
00:30:48.793 [2024-11-20 16:25:24.512647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.512677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.513058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.513087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.513443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.513477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.513822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.513851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.514203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.514234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.514605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.514636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.515007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.515037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.515436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.515465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.515865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.515894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.516147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.516188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 
00:30:48.793 [2024-11-20 16:25:24.516553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.516583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.516956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.516984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.517361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.517391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.517775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.517804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.518169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.518199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.518571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.518600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.518970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.518998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.519353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.519383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.519727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.519756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 00:30:48.793 [2024-11-20 16:25:24.520118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.793 [2024-11-20 16:25:24.520149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.793 qpair failed and we were unable to recover it. 
00:30:48.794 [2024-11-20 16:25:24.520532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.520562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.520863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.520892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.521259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.521288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.521642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.521672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.522014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.522043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.522295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.522324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.522694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.522724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.523073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.523103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.523474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.523513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.523729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.523757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 
00:30:48.794 [2024-11-20 16:25:24.524098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.524127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.524500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.524534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.524881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.524919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.525225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.525260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.525644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.525674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.526038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.526066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.526412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.526444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.526826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.526857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.527222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.527253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.527643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.527672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 
00:30:48.794 [2024-11-20 16:25:24.528036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.528065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.528453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.528483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.528851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.528881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.529221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.529253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.529598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.529628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.529995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.530026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.530343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.530374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.530751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.530779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.531144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.531180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.794 qpair failed and we were unable to recover it. 00:30:48.794 [2024-11-20 16:25:24.531507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.794 [2024-11-20 16:25:24.531537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 
00:30:48.795 [2024-11-20 16:25:24.531907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.531936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.532299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.532329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.532687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.532717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.533060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.533090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.533395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.533425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.533767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.533798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.534155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.534192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.534575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.534605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.534952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.534982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.535131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.535181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 
00:30:48.795 [2024-11-20 16:25:24.535567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.535597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.535972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.536004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.536261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.536291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.536660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.536688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.537054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.537082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.537432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.537466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.537831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.537861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.538227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.538258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.538695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.538726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 00:30:48.795 [2024-11-20 16:25:24.539075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.795 [2024-11-20 16:25:24.539105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.795 qpair failed and we were unable to recover it. 
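A note for anyone triaging this failure: errno 111 is ECONNREFUSED, i.e. nothing was accepting connections at 10.0.0.2:4420 while the host side kept retrying. A minimal sketch of how to confirm that from a shell on the test node, assuming the address and port from the log above (the python3/nc invocations are illustrative aids, not part of the autotest scripts; nc flags as in OpenBSD netcat):

  # Decode the errno value printed by posix_sock_create (111 -> ECONNREFUSED)
  python3 -c "import errno, os; print(errno.errorcode[111], '-', os.strerror(111))"
  # Probe the NVMe/TCP listener the host is retrying; with no listener bound
  # to the port, connect() fails immediately with "Connection refused"
  nc -vz 10.0.0.2 4420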
00:30:48.795 [2024-11-20 16:25:24.539241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:48.795 [2024-11-20 16:25:24.539289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:48.795 [2024-11-20 16:25:24.539297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:48.795 [2024-11-20 16:25:24.539304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:48.795 [2024-11-20 16:25:24.539310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:48.795 [2024-11-20 16:25:24.541501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:48.795 [2024-11-20 16:25:24.541663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:48.796 [2024-11-20 16:25:24.542345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:48.796 [2024-11-20 16:25:24.542348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:48.796 [... the connect() failed, errno = 111 / sock connection error / qpair failed three-line sequence keeps repeating around these notices, interleaved mid-line in the raw output as the reactor cores came up, from 16:25:24.539 through 16:25:24.545 ...]
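The app_setup_trace notices above spell out how to capture the trace from this run; as a sketch, on the node while the nvmf app is still running (the spdk_trace arguments and the /dev/shm file name are taken verbatim from the notices; the destination path is illustrative):

  # Snapshot tracepoint events at runtime from trace instance 0 of the nvmf app
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0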
00:30:48.796 [... the connect() failed, errno = 111 retry pattern continues unchanged (timestamps 16:25:24.545 through 16:25:24.566, tqpair=0x7f1848000b90, addr=10.0.0.2, port=4420), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:30:48.797 [2024-11-20 16:25:24.567229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.797 [2024-11-20 16:25:24.567261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.797 qpair failed and we were unable to recover it. 00:30:48.797 [2024-11-20 16:25:24.567500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.797 [2024-11-20 16:25:24.567530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.797 qpair failed and we were unable to recover it. 00:30:48.797 [2024-11-20 16:25:24.567882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.797 [2024-11-20 16:25:24.567912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.797 qpair failed and we were unable to recover it. 00:30:48.797 [2024-11-20 16:25:24.568306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.797 [2024-11-20 16:25:24.568338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.797 qpair failed and we were unable to recover it. 00:30:48.797 [2024-11-20 16:25:24.568700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.797 [2024-11-20 16:25:24.568731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.797 qpair failed and we were unable to recover it. 00:30:48.797 [2024-11-20 16:25:24.569090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.797 [2024-11-20 16:25:24.569120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.797 qpair failed and we were unable to recover it. 00:30:48.797 [2024-11-20 16:25:24.569475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.569509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.569886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.569918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.570290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.570322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.570549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.570579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 
00:30:48.798 [2024-11-20 16:25:24.570875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.570905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.571265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.571295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.571661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.571698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.571993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.572023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.572375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.572406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.572650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.572678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.573062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.573097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.573520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.573551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.573829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.573859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.574211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.574242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 
00:30:48.798 [2024-11-20 16:25:24.574614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.574642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.575021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.575051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.575415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.575445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.575820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.575849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.576222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.576254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.576362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.576424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.576762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.576792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.577002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.577030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.577417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.577448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.577636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.577667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 
00:30:48.798 [2024-11-20 16:25:24.577826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.577855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.578003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.578032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.578403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.578435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.578790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.578819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.579031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.579060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.579428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.579458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.579718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.579750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.580134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.580172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.580519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.580548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.580917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.580947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 
00:30:48.798 [2024-11-20 16:25:24.581313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.581344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.581720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.581749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.582089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.582119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.582471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.798 [2024-11-20 16:25:24.582502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.798 qpair failed and we were unable to recover it. 00:30:48.798 [2024-11-20 16:25:24.582755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.582784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.583130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.583169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.583535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.583564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.583814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.583843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.584220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.584251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.584604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.584634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 
00:30:48.799 [2024-11-20 16:25:24.584851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.584881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.585234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.585263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.585613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.585643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.585988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.586017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.586304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.586333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.586719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.586748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.587007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.587045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.587410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.587440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.587666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.587695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.588017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.588046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 
00:30:48.799 [2024-11-20 16:25:24.588426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.588457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.588821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.588850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.589221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.589251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.589638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.589667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.590026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.590055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.590273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.590303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.590622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.590651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.590996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.591025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.591252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.591281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.591537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.591565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 
00:30:48.799 [2024-11-20 16:25:24.591917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.591947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.592327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.592358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.592725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.592755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.593154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.593194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.593575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.593606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.593968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.593999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.594386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.594416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.594759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.594788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.595197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.595227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.595614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.595643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 
00:30:48.799 [2024-11-20 16:25:24.595900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.595932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.799 [2024-11-20 16:25:24.596315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.799 [2024-11-20 16:25:24.596346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.799 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.596701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.596730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.597061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.597091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.597470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.597501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.597732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.597767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.598099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.598130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.598489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.598519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.598761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.598790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.599165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.599197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 
00:30:48.800 [2024-11-20 16:25:24.599563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.599592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.599906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.599935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.600080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.600110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.600484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.600514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.600873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.600901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.601146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.601183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.601543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.601572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.601931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.601960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.602335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.602366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.602726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.602755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 
00:30:48.800 [2024-11-20 16:25:24.603124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.603152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.603367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.603396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.603640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.603669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.604058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.604088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.604436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.604468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.604801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.604838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.605178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.605208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.605551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.605580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.605937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.605968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.606332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.606363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 
00:30:48.800 [2024-11-20 16:25:24.606625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.606657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.607001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.607032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.607385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.607415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.607786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.607816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.608219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.800 [2024-11-20 16:25:24.608248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.800 qpair failed and we were unable to recover it. 00:30:48.800 [2024-11-20 16:25:24.608627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.608663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.609020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.609049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.609394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.609424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.609678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.609707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.610064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.610094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 
00:30:48.801 [2024-11-20 16:25:24.610430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.610460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.610876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.610907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.611154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.611191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.611562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.611600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.611960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.611989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.612378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.612408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.612791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.612820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.613178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.613208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.613448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.613477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 00:30:48.801 [2024-11-20 16:25:24.613716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.801 [2024-11-20 16:25:24.613746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.801 qpair failed and we were unable to recover it. 
00:30:48.801 [2024-11-20 16:25:24.614010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.801 [2024-11-20 16:25:24.614039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:48.801 qpair failed and we were unable to recover it.
00:30:48.801 [... one more identical failure on tqpair=0x7f1848000b90 at 16:25:24.614133 ...]
00:30:48.801 [2024-11-20 16:25:24.614753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.801 [2024-11-20 16:25:24.614873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420
00:30:48.801 qpair failed and we were unable to recover it.
00:30:48.802 [... the tqpair pointer changes to 0x152e0c0 at 16:25:24.614753; the same three-line failure then repeats back-to-back through 16:25:24.628610, still with addr=10.0.0.2, port=4420 ...]
00:30:48.802 [2024-11-20 16:25:24.628988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.629018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.629363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.629393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.629761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.629790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.630051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.630088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.630325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.630357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.630757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.630787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.631018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.631047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.631309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.631347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.631711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.631741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.632118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.632148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 
00:30:48.802 [2024-11-20 16:25:24.632374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.632404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.632813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.632845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.633188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.633226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.633534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.633562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.633809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.633842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.634051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.634081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.634294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.634324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.634701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.634730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.634943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.634973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.635183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.635214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 
00:30:48.802 [2024-11-20 16:25:24.635609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.635637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.802 [2024-11-20 16:25:24.635850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.802 [2024-11-20 16:25:24.635880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.802 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.636089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.636118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.636496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.636526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.636878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.636907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.637204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.637234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.637599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.637628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.637944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.637974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.638314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.638345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.638614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.638642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 
00:30:48.803 [2024-11-20 16:25:24.638878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.638906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.639233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.639263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.639365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.639394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.639751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.639780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.640155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.640198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.640445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.640474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.640847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.640875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.641217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.641254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.641490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.641518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.641884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.641913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 
00:30:48.803 [2024-11-20 16:25:24.642170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.642200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.642582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.642611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.643006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.643034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.643398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.643429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.643639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.643668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.643964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.643992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.644372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.644402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.644649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.644676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.644901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.644931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.645289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.645319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 
00:30:48.803 [2024-11-20 16:25:24.645660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.645688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.646024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.646053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.646407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.646436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.646671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.646699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.647049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.647078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.647397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.647427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.647818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.647847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.648062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.648090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.648419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.648448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 00:30:48.803 [2024-11-20 16:25:24.648760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.803 [2024-11-20 16:25:24.648790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.803 qpair failed and we were unable to recover it. 
00:30:48.803 [2024-11-20 16:25:24.649007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.649036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.649441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.649472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.649833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.649864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.649990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.650023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.650382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.650412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.650714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.650743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.650987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.651016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.651237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.651267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.651594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.651623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.651753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.651781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 
00:30:48.804 [2024-11-20 16:25:24.651877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.651904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.652309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.652338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.652676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.652706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.653100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.653129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.653541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.653572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.653835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.653865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.654194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.654225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.654565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.654596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.654966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.654994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.655231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.655261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 
00:30:48.804 [2024-11-20 16:25:24.655509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.655537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.655828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.655857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.656105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.656133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.656482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.656514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.656928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.656957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.657295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.657326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.657699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.657728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.658107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.658136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.658555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.658585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.659022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.659052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 
00:30:48.804 [2024-11-20 16:25:24.659440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.659472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.659873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.659902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.804 [2024-11-20 16:25:24.660249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.804 [2024-11-20 16:25:24.660279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.804 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.660516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.660548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.660893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.660924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.661149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.661187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.661375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.661407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.661784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.661813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.662033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.662060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.662409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.662441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 
00:30:48.805 [2024-11-20 16:25:24.662664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.662692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.663057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.663086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.663323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.663360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.663650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.663679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.663905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.663933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.664298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.664327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.664706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.664734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.665111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.665139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.665572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.665601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.665958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.665986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 
00:30:48.805 [2024-11-20 16:25:24.666388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.666418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.666782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.666811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.667035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.667063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.667174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.667202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x152e0c0 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.667624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.667732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.668180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.668220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.668786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.668893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.669232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.669298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.669614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.669648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.670024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.670054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 
00:30:48.805 [2024-11-20 16:25:24.670368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.670398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.670624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.670656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.670917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.670947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.671176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.671207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.671441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.671471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.671832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.671861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.672235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.672267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.672642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.672671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.673039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.673068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 00:30:48.805 [2024-11-20 16:25:24.673500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.673532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it. 
00:30:48.805 [2024-11-20 16:25:24.673878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.805 [2024-11-20 16:25:24.673907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:48.805 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triple repeats 68 more times for tqpair=0x7f1848000b90 (addr=10.0.0.2, port=4420), timestamps 2024-11-20 16:25:24.674134 through 16:25:24.697656, differing only in their microsecond timestamps ...]
00:30:48.807 Read completed with error (sct=0, sc=8) 00:30:48.807 starting I/O failed
[... the "completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for 32 outstanding I/Os in total: 23 reads and 9 writes ...]
00:30:48.808 [2024-11-20 16:25:24.698468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:48.808 [2024-11-20 16:25:24.698900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.808 [2024-11-20 16:25:24.698963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:48.808 qpair failed and we were unable to recover it.
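For reference, not part of the captured run: in the completions above, sct/sc are the NVMe Status Code Type and Status Code. Assuming the numeric values of the NVMe base specification, sct=0 selects the Generic Command Status set, in which sc=0x8 is "Command Aborted due to SQ Deletion" -- i.e. these 32 I/Os were aborted when their queue pair went away, not failed by the media. A minimal decoding sketch under that assumption:

#include <stdio.h>

/* Decode the (sct, sc) pairs printed in the log above.
 * Values follow the NVMe base specification; only the codes
 * that actually appear in this log are mapped. */
static const char *decode_status(int sct, int sc)
{
    if (sct == 0) {                     /* Generic Command Status */
        if (sc == 0x0) return "Successful Completion";
        if (sc == 0x8) return "Command Aborted due to SQ Deletion";
        return "unmapped generic status";
    }
    return "unmapped status code type";
}

int main(void)
{
    /* The failed I/Os above all report sct=0, sc=8. */
    printf("sct=0 sc=8 -> %s\n", decode_status(0, 8));
    return 0;
}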
[... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." triple then repeats 130 more times for tqpair=0x7f1840000b90 (addr=10.0.0.2, port=4420), timestamps 2024-11-20 16:25:24.699446 through 16:25:24.746099, again differing only in their microsecond timestamps ...]
00:30:49.084 [2024-11-20 16:25:24.746344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.746373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.746615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.746647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.747007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.747037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.747379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.747409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.747785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.747813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.748177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.748207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.748565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.748594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.748937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.748965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.749289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.749320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.749672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.749700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 
00:30:49.084 [2024-11-20 16:25:24.750072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.750100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.750474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.750504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.750829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.750858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.751205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.751237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.751607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.751635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.751864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.084 [2024-11-20 16:25:24.751892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.084 qpair failed and we were unable to recover it. 00:30:49.084 [2024-11-20 16:25:24.752223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.752252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.752477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.752505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.752763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.752792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.753178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.753210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 
00:30:49.085 [2024-11-20 16:25:24.753423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.753453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.753798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.753828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.754200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.754231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.754599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.754627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.754992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.755020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.755389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.755418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.755774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.755803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.756169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.756199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.756591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.756619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.756857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.756887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 
00:30:49.085 [2024-11-20 16:25:24.757245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.757275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.757517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.757549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.757770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.757807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.758170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.758202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.758537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.758566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.758942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.758971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.759190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.759221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.759593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.759622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.759989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.760018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.760285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.760315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 
00:30:49.085 [2024-11-20 16:25:24.760535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.760567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.760782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.760810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.761190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.761222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.761595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.761624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.761875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.761904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.762245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.762276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.762666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.762696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.762909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.762937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.763248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.763278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.763640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.763670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 
00:30:49.085 [2024-11-20 16:25:24.764045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.085 [2024-11-20 16:25:24.764073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.085 qpair failed and we were unable to recover it. 00:30:49.085 [2024-11-20 16:25:24.764187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.764219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.764658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.764687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.765051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.765079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.765439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.765469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.765699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.765728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.766078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.766106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.766511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.766541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.766906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.766934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.767192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.767223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 
00:30:49.086 [2024-11-20 16:25:24.767597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.767625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.767842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.767870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.768078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.768107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.768355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.768384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.768705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.768733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.769115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.769144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.769503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.769533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.769757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.769786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.769999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.770027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.770371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.770403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 
00:30:49.086 [2024-11-20 16:25:24.770752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.770782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.770925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.770955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.771304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.771341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.771692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.771720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.772083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.772111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.772421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.772451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.772680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.772708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.773076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.773104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.773518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.773547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.773911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.773940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 
00:30:49.086 [2024-11-20 16:25:24.774292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.774322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.774680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.774709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.086 [2024-11-20 16:25:24.775024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.086 [2024-11-20 16:25:24.775062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.086 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.775185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.775216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.775543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.775572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.775938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.775966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.776334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.776365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.776682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.776711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.777098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.777127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.777427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.777456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 
00:30:49.087 [2024-11-20 16:25:24.777806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.777834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.778207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.778237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.778470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.778498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.778812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.778841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.779220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.779250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.779612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.779641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.779869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.779898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.780235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.780271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.780627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.780656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.781017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.781047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 
00:30:49.087 [2024-11-20 16:25:24.781358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.781388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.781623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.781652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.781889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.781918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.782240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.782271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.782627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.782656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.782898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.782926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.783294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.783325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.783689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.783717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.784071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.784100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.784468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.784498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 
00:30:49.087 [2024-11-20 16:25:24.784874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.784902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.785108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.785136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.785555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.785593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.785941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.785970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.786320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.786351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.786721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.786750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.787127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.787155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.787531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.787561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.787919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.787948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 00:30:49.087 [2024-11-20 16:25:24.788311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.087 [2024-11-20 16:25:24.788340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.087 qpair failed and we were unable to recover it. 
00:30:49.087 [2024-11-20 16:25:24.788686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.788715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.789077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.789106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.789468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.789497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.789850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.789879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.790084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.790113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.790502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.790533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.790879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.790909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.791278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.791310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.791639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.791669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 00:30:49.088 [2024-11-20 16:25:24.791905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.791934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it. 
00:30:49.088 [2024-11-20 16:25:24.792183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.088 [2024-11-20 16:25:24.792213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.088 qpair failed and we were unable to recover it.
00:30:49.088 [... ~200 further identical failure triplets elided: the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." records repeat continuously from 16:25:24.792 through 16:25:24.866 (console time 00:30:49.088-00:30:49.094); every connect attempt to 10.0.0.2:4420 fails the same way and each qpair is abandoned ...]
00:30:49.094 [2024-11-20 16:25:24.866603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.866632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.866755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.866787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.867180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.867211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.867533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.867562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.867983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.868012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.868340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.868370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.868760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.868789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.869157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.869209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.869552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.869581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.869958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.869987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 
00:30:49.094 [2024-11-20 16:25:24.870374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.870406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.870761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.870790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.871019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.871048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.871259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.871290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.871653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.871682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.872049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.872077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.872540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.872570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.872927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.872956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.873254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.094 [2024-11-20 16:25:24.873285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.094 qpair failed and we were unable to recover it. 00:30:49.094 [2024-11-20 16:25:24.873651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.873680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 
00:30:49.095 [2024-11-20 16:25:24.874045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.874073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.874440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.874471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.874837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.874866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.875235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.875265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.875635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.875664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.875985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.876014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.876348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.876380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.876720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.876748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.877131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.877171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.877502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.877531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 
00:30:49.095 [2024-11-20 16:25:24.877878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.877908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.878292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.878322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.878572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.878600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.878996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.879025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.879408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.879438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.879806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.879834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.880047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.880076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.880394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.880424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.880807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.880836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.881221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.881263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 
00:30:49.095 [2024-11-20 16:25:24.881603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.881632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.881858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.881887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.882279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.882309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.882681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.882711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.882933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.882963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.883322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.883353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.883599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.883628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.883875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.883904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.884338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.884369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.095 qpair failed and we were unable to recover it. 00:30:49.095 [2024-11-20 16:25:24.884584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.095 [2024-11-20 16:25:24.884613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 
00:30:49.096 [2024-11-20 16:25:24.884964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.884995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.885331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.885362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.885720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.885750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.886129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.886170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.886510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.886541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.886769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.886798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.887143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.887181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.887453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.887485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.887892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.887922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.888155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.888194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 
00:30:49.096 [2024-11-20 16:25:24.888477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.888507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.888856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.888885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.889150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.889189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.889554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.889584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.889952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.889980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.890318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.890348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.890608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.890638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.890987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.891023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.891393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.891424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.891806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.891835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 
00:30:49.096 [2024-11-20 16:25:24.892184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.892213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.892573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.892602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.892966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.892996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.893345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.893376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.893736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.893766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.893967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.893996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.894221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.894251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.894492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.894522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.894900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.894929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.895279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.895316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 
00:30:49.096 [2024-11-20 16:25:24.895677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.895707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.895928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.895956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.096 qpair failed and we were unable to recover it. 00:30:49.096 [2024-11-20 16:25:24.896173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.096 [2024-11-20 16:25:24.896204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.896458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.896487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.896845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.896874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.897240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.897270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.897667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.897697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.897912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.897941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.898310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.898341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.898719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.898748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 
00:30:49.097 [2024-11-20 16:25:24.898956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.898983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.899360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.899391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.899741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.899771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.900119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.900149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.900505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.900534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.900896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.900925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.901195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.901227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.901461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.901489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.901828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.901858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.902060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.902089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 
00:30:49.097 [2024-11-20 16:25:24.902478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.902509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.902856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.902885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.903136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.903172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.903389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.903417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.903808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.903836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.904177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.904209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.904561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.904591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.904860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.904892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.905250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.905282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.905638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.905667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 
00:30:49.097 [2024-11-20 16:25:24.906038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.906067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.906422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.906460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.906832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.906860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.907202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.907233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.907602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.907631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.908001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.908030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.908138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.908196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.097 [2024-11-20 16:25:24.909636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.097 [2024-11-20 16:25:24.909694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.097 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.910051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.910086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.910307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.910346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 
00:30:49.098 [2024-11-20 16:25:24.910703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.910733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.911111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.911141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.911508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.911538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.911911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.911942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.912289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.912323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.912675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.912705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.913065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.913094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.913473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.913503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.913874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.913904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.914255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.914288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 
00:30:49.098 [2024-11-20 16:25:24.914629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.914666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.914912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.914941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.915285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.915316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.915544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.915573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.915950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.915979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.916193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.916222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.916597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.916627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.916957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.916985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.917358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.917388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 00:30:49.098 [2024-11-20 16:25:24.917849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.098 [2024-11-20 16:25:24.917878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.098 qpair failed and we were unable to recover it. 
00:30:49.104 [2024-11-20 16:25:24.985843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.985873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.986238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.986268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.986619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.986649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.987026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.987056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.987396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.987427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.987797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.987826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.988030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.988059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.988154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.988191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.988695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.988801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.989440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.989547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 
00:30:49.104 [2024-11-20 16:25:24.989999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.990035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.990506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.990613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.991005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.991038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.991298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.991328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.991476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.991509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.991844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.991874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1840000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.992041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.992087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.992492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.992526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.992899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.992928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.993312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.993343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 
00:30:49.104 [2024-11-20 16:25:24.993689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.993718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.994082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.994111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.994373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.994403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.994771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.994801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.995173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.995203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.995519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.995547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.995774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.995803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.996199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.996229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.996445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.996475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.996739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.996782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 
00:30:49.104 [2024-11-20 16:25:24.997028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.997056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.997396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.997428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.997816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.997845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.998204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.998236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.998654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.998683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.999054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.104 [2024-11-20 16:25:24.999083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.104 qpair failed and we were unable to recover it. 00:30:49.104 [2024-11-20 16:25:24.999309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:24.999339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:24.999616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:24.999645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.000016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.000046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.000270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.000300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 
00:30:49.105 [2024-11-20 16:25:25.000672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.000701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.001068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.001097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.001474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.001504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.001865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.001894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.002133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.002169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.002418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.002447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.105 [2024-11-20 16:25:25.002804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.105 [2024-11-20 16:25:25.002834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.105 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.003188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.003220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.003645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.003681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.004048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.004077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 
00:30:49.379 [2024-11-20 16:25:25.004303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.004332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.004579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.004608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.004972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.005001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.005387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.005418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.005729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.005759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.006107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.006137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.006517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.006549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.006777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.006806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.006940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.006970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.007506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.007632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 
00:30:49.379 [2024-11-20 16:25:25.008123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.008206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.008745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.008851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.009419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.009526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.009811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.009850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.010415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.010521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.010934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.010970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.011227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.011281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.011677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.011706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.012106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.012136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.012481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.012524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 
00:30:49.379 [2024-11-20 16:25:25.012627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.012655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.012903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.012931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.013278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.013308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.013510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.013538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.013899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.013928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.014180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.014212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.379 [2024-11-20 16:25:25.014638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.379 [2024-11-20 16:25:25.014667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.379 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.015041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.015078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.015383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.015413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.015781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.015809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 
00:30:49.380 [2024-11-20 16:25:25.016181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.016210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.016656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.016684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.017048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.017077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.017500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.017532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.017789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.017824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.018168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.018200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.018587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.018617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.018997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.019025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.019404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.019433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.019664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.019693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 
00:30:49.380 [2024-11-20 16:25:25.020070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.020098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.020472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.020502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.020753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.020781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.020996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.021025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.021369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.021399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.021769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.021799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.022012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.022048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.022385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.022415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.022786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.022814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.023189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.023220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 
00:30:49.380 [2024-11-20 16:25:25.023593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.023623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.023979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.024008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.024277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.024310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.024685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.024714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.025079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.025109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.025550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.025581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.025916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.025944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.026299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.026329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.380 [2024-11-20 16:25:25.026712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.380 [2024-11-20 16:25:25.026740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.380 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.027132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.027176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 
00:30:49.381 [2024-11-20 16:25:25.027553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.027583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.027826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.027855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.028182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.028213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.028527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.028557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.028933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.028963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.029322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.029352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.029711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.029740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.030143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.030182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.030517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.030546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.030930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.030959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 
00:30:49.381 [2024-11-20 16:25:25.031346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.031375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.031759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.031788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.032153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.032190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.032559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.032589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.032961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.032991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.033367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.033398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.033749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.033779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.034002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.034030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.034334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.034363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.034725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.034754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 
00:30:49.381 [2024-11-20 16:25:25.034989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.035018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.035228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.035259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.035654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.035683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.035898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.035926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.036337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.036367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.036738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.036768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.037090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.037124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.037500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.037530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.381 [2024-11-20 16:25:25.037776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.381 [2024-11-20 16:25:25.037805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.381 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.038173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.038203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 
00:30:49.382 [2024-11-20 16:25:25.038548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.038578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.038948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.038977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.039213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.039242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.039628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.039657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.040043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.040074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.040416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.040447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.040809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.040838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.041200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.041231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.041607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.041635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 00:30:49.382 [2024-11-20 16:25:25.042018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.382 [2024-11-20 16:25:25.042046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.382 qpair failed and we were unable to recover it. 
00:30:49.388 [2024-11-20 16:25:25.110711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.110739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.388 [2024-11-20 16:25:25.110949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.110977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.388 [2024-11-20 16:25:25.111327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.111357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.388 [2024-11-20 16:25:25.111584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.111613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.388 [2024-11-20 16:25:25.111955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.111985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.388 [2024-11-20 16:25:25.112345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.112376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.388 [2024-11-20 16:25:25.112638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.112667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.388 [2024-11-20 16:25:25.113068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.388 [2024-11-20 16:25:25.113097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.388 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.113322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.113353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.113711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.113741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 
00:30:49.389 [2024-11-20 16:25:25.114120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.114150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.114513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.114544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.114907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.114937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.115281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.115313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.115675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.115705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.115940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.115971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.116321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.116350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.116718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.116748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.117101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.117131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.117377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.117408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 
00:30:49.389 [2024-11-20 16:25:25.117622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.117652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.118000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.118029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.118279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.118310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.118672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.118704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.119061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.119092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.119416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.119449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.119689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.119722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.120079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.120109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.120486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.120518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.120864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.120894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 
00:30:49.389 [2024-11-20 16:25:25.121241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.121273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.121651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.121682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.122026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.122056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.122330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.122362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.122746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.122778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.123112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.123143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.123506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.123544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.123952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.123982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.124197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.124230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.124591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.124622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 
00:30:49.389 [2024-11-20 16:25:25.124985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.125014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.389 [2024-11-20 16:25:25.125387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.389 [2024-11-20 16:25:25.125417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.389 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.125782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.125810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.126188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.126219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.126489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.126517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.126870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.126899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.127262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.127292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.127507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.127536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.127766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.127795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.128146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.128193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 
00:30:49.390 [2024-11-20 16:25:25.128309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.128338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.128613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.128643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.129010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.129040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.129269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.129300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.129651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.129680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.129915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.129944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.130198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.130229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.130550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.130580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.130946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.130975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.131200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.131231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 
00:30:49.390 [2024-11-20 16:25:25.131584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.131614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.131844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.131873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.132095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.132123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.132367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.132397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.132667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.132699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.133081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.133110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.133383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.390 [2024-11-20 16:25:25.133413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.390 qpair failed and we were unable to recover it. 00:30:49.390 [2024-11-20 16:25:25.133767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.133796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.134035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.134063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.134403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.134434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 
00:30:49.391 [2024-11-20 16:25:25.134804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.134835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.135098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.135126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.135364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.135395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.135641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.135670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.136048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.136077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.136453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.136482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.136847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.136882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.137142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.137180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.137475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.137504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.137746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.137774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 
00:30:49.391 [2024-11-20 16:25:25.138024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.138054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.138323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.138354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.138721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.138750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.139120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.139150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.139517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.139546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.139915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.139946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.140176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.140207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.140598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.140628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.140996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.141026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.141223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.141255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 
00:30:49.391 [2024-11-20 16:25:25.141614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.141643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.141977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.142005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.142263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.142292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.142628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.142658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.143086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.143115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.143496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.391 [2024-11-20 16:25:25.143526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.391 qpair failed and we were unable to recover it. 00:30:49.391 [2024-11-20 16:25:25.143869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.143898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.144221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.144255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.144589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.144617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.145001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.145030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 
00:30:49.392 [2024-11-20 16:25:25.145383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.145415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.145788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.145818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.146028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.146057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.146431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.146462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.146847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.146877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.147237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.147267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.147623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.147652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.147864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.147893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.148260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.148290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.148666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.148695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 
00:30:49.392 [2024-11-20 16:25:25.148933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.148962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.149314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.149346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.149714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.149744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.150102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.150130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.150344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.150374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.150611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.150641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.150906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.150943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.151308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.151340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.151604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.151634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.151994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.152023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 
00:30:49.392 [2024-11-20 16:25:25.152290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.152320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.152747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.152776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.153148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.153193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.156609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.156714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.157188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.157227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.157570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.157601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.157795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.157825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.158184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.158216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.158561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.392 [2024-11-20 16:25:25.158592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.392 qpair failed and we were unable to recover it. 00:30:49.392 [2024-11-20 16:25:25.158818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.158847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 
00:30:49.393 [2024-11-20 16:25:25.159067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.159098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.159479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.159513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.159898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.159927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.160171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.160201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.160448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.160477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.160724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.160754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.161151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.161195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.161450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.161480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.161750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.161782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 00:30:49.393 [2024-11-20 16:25:25.162141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.162180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it. 
00:30:49.393 [2024-11-20 16:25:25.162536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.393 [2024-11-20 16:25:25.162568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 00:30:49.393 qpair failed and we were unable to recover it.
00:30:49.397 [... the same three-message sequence (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x7f183c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats more than a hundred times, from 16:25:25.162943 through 16:25:25.202535 ...]
00:30:49.397 [2024-11-20 16:25:25.202977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.397 [2024-11-20 16:25:25.203071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.397 qpair failed and we were unable to recover it.
00:30:49.397 [... the sequence continues against the new tqpair=0x7f1848000b90, repeating from 16:25:25.203419 through 16:25:25.208355 ...]
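Note: errno = 111 is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 while the host keeps trying to set up the NVMe/TCP qpair. A minimal stand-alone sketch (plain POSIX sockets, not SPDK code; the address and port simply mirror the log) that reproduces the same errno:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* blocking TCP socket */
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),              /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target addr from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, Linux sets errno to ECONNREFUSED,
         * which has the value 111 seen throughout the log above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}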
00:30:49.397 [... the tqpair=0x7f1848000b90 failure sequence keeps repeating (16:25:25.208731 through 16:25:25.211260), interleaved with the test script's xtrace as it finishes starting the target: ...]
00:30:49.397 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:49.397 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:49.397 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:49.397 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:49.397 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:49.398 [2024-11-20 16:25:25.211613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.398 [2024-11-20 16:25:25.211643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.398 qpair failed and we were unable to recover it.
00:30:49.400 [... the same sequence repeats roughly seventy more times, ending at 16:25:25.235396 with a final "qpair failed and we were unable to recover it." ...]
00:30:49.400 [2024-11-20 16:25:25.235769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.235802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.236156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.236195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.236448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.236479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.236848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.236878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.237340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.237371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.237719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.237749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.238122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.238166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.238397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.238426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.238809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.238839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.239211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.239243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 
00:30:49.400 [2024-11-20 16:25:25.239607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.239637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.239987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.240017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.240402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.240433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.240777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.240806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.241175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.241205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.241577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.241607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.241967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.241997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.242328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.242358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.242710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.242741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.242967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.242998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 
00:30:49.400 [2024-11-20 16:25:25.243281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.243311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.243655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.243684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.244091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.400 [2024-11-20 16:25:25.244122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.400 qpair failed and we were unable to recover it. 00:30:49.400 [2024-11-20 16:25:25.244527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.244558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.244922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.244953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.245331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.245362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.245602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.245630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.245922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.245951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.246315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.246346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.246717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.246747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 
00:30:49.401 [2024-11-20 16:25:25.247128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.247177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.247550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.247580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.247790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.247820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.248183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.248216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.248555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.248585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.248921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.248951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.249263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.249295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.249651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.249681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.250063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.250092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.250462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.250494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 
00:30:49.401 [2024-11-20 16:25:25.250730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.250761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.251135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.251173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.251511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.251540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.251755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.251785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.252149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.252188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.252522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.252553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.252806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.252835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.253224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 00:30:49.401 [2024-11-20 16:25:25.253456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.401 [2024-11-20 16:25:25.253485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.401 qpair failed and we were unable to recover it. 
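errno 111 is ECONNREFUSED: the host side keeps calling connect() toward 10.0.0.2:4420 while nothing is listening there, which is exactly the condition a target-disconnect case provokes. A minimal shell sketch of watching for the same condition from outside the test (assumes bash and a netcat with -z/-w support; the probe is illustrative, not part of the suite):

# Probe the NVMe/TCP listener; nc exits non-zero on a refused connect,
# mirroring the posix_sock_create connect() failures logged above.
until nc -z -w 1 10.0.0.2 4420; do
  echo "10.0.0.2:4420 refused (errno 111 / ECONNREFUSED); retrying"
  sleep 0.1
done
echo "listener is accepting connections again"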
00:30:49.401 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:49.402 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:49.402 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.402 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:49.402 [... connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated 8 times between 16:25:25.253829 and 16:25:25.256309, interleaved with the xtrace output above ...]
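The trap registered at nvmf/common.sh@512 is the harness's teardown hook: dump the app's shared-memory state, then tear the target down, on interrupt, termination, or normal exit. A self-contained sketch of the same pattern (process_shm and nvmftestfini are the suite's own helpers; the stub bodies here are illustrative stand-ins):

#!/usr/bin/env bash
NVMF_APP_SHM_ID=0
# Illustrative stand-ins for the suite's real helpers.
process_shm()  { echo "would dump shm segment for --id $2"; }
nvmftestfini() { echo "would stop the nvmf target and restore the NICs"; }
# Run the dump-then-teardown pair on Ctrl-C, kill, or normal exit;
# '|| :' keeps a failing shm dump from short-circuiting the cleanup.
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT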
00:30:49.405 [... connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated 90 times between 16:25:25.256734 and 16:25:25.288311, all against tqpair=0x7f1848000b90, addr=10.0.0.2, port=4420 ...]
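Runs of this shape are easier to audit by counting than by reading. A quick triage sketch over a captured console log (the file name is illustrative):

# How many refused connects, and were they all against the same qpair?
grep -c 'connect() failed, errno = 111' console.log
grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c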
00:30:49.405 [2024-11-20 16:25:25.288566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.288595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.288987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.289018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.289386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.289419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.289806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.289835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.290182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.290225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.290493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.290524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.290782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.290812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.291212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.291243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 Malloc0 00:30:49.405 [2024-11-20 16:25:25.291588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.291620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 00:30:49.405 [2024-11-20 16:25:25.291850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.405 [2024-11-20 16:25:25.291879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420 00:30:49.405 qpair failed and we were unable to recover it. 
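errno 111 is ECONNREFUSED: the host keeps retrying its connect() while nothing is listening on 10.0.0.2:4420 yet, so this flood is expected until the listener RPC lands further down in the log. A minimal sketch (illustrative only, not part of the harness) of waiting for the listener instead of hammering it:

    # Probe the target address from the log until something accepts; each
    # failed probe is the same refused connect() seen above.
    until nc -z 10.0.0.2 4420 2>/dev/null; do
        sleep 0.1
    done
    echo "listener up on 10.0.0.2:4420"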
00:30:49.405 [2024-11-20 16:25:25.292178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.405 [2024-11-20 16:25:25.292208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1848000b90 with addr=10.0.0.2, port=4420
00:30:49.405 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.405 qpair failed and we were unable to recover it.
[... 1 more occurrence of the triplet at 16:25:25.292471 ...]
00:30:49.405 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[... 1 more occurrence of the triplet at 16:25:25.292735 ...]
00:30:49.405 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.405 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the triplet repeats 5 more times, 16:25:25.293148 through 16:25:25.294555 ...]
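The rpc_cmd nvmf_create_transport call above is the harness creating the target's TCP transport. rpc_cmd is the autotest wrapper around SPDK's RPC client; a rough standalone equivalent, assuming a stock SPDK checkout and the default RPC socket (both assumptions; the flags are copied verbatim from the log), would be:

    # Sketch only: the same RPC issued directly via scripts/rpc.py.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o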
[... the triplet repeats 10 more times, 16:25:25.294946 through 16:25:25.298129 ...]
[... 1 more occurrence of the triplet at 16:25:25.298565 ...]
00:30:49.406 [2024-11-20 16:25:25.298776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... the triplet repeats 8 more times, 16:25:25.298863 through 16:25:25.301089 ...]
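The *** TCP Transport Init *** notice is the target-side confirmation that the transport requested by the previous RPC now exists. One way to verify it out of band (a sketch, under the same checkout and socket assumptions as above):

    # List the target's active transports; a tcp entry should appear.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports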
[... the triplet repeats 19 more times, 16:25:25.301383 through 16:25:25.307610 ...]
00:30:49.672 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... the triplet repeats 2 more times, 16:25:25.307864 and 16:25:25.308124 ...]
00:30:49.672 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:49.672 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.672 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the triplet repeats 6 more times, 16:25:25.308546 through 16:25:25.310530 ...]
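The nvmf_create_subsystem call above creates the NVMe-oF subsystem the host keeps dialing: -s sets its serial number and -a allows any host NQN to connect. A standalone sketch under the same checkout and socket assumptions:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001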
[... the triplet repeats 20 more times, 16:25:25.310924 through 16:25:25.317153 ...]
[... the triplet repeats 7 more times, 16:25:25.317587 through 16:25:25.319723 ...]
00:30:49.673 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... 1 more occurrence of the triplet at 16:25:25.320044 ...]
00:30:49.673 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
[... 1 more occurrence of the triplet at 16:25:25.320507 ...]
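The lone "Malloc0" printed earlier is the bdev name echoed when the harness created it; nvmf_subsystem_add_ns above attaches that bdev to cnode1 as a namespace. Same assumptions as the previous sketches:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc0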
00:30:49.673 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.673 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the triplet repeats 9 more times, 16:25:25.320897 through 16:25:25.323653 ...]
[... the triplet repeats 20 more times, 16:25:25.324033 through 16:25:25.330659 ...]
[... the triplet repeats 4 more times, 16:25:25.330917 through 16:25:25.331814 ...]
00:30:49.674 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... 1 more occurrence of the triplet at 16:25:25.332216 ...]
00:30:49.674 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:49.674 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.674 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the triplet repeats 3 more times, 16:25:25.332646 through 16:25:25.333208 ...]
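This nvmf_subsystem_add_listener call is what finally opens 10.0.0.2:4420; the "Target Listening" notice a few records below is its confirmation, and the errno-111 flood stops once the socket accepts. Standalone sketch, same assumptions:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420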
[... the triplet repeats 16 more times while the listener RPC is processed, 16:25:25.333649 through 16:25:25.338927 ...]
00:30:49.675 [2024-11-20 16:25:25.339140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:49.675 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.675 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:49.675 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.675 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:49.675 [2024-11-20 16:25:25.350092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.675 [2024-11-20 16:25:25.350265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.675 [2024-11-20 16:25:25.350316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.675 [2024-11-20 16:25:25.350341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.675 [2024-11-20 16:25:25.350361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:49.675 [2024-11-20 16:25:25.350418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.675 qpair failed and we were unable to recover it.
00:30:49.675 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.675 16:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1473058
00:30:49.675 [2024-11-20 16:25:25.359803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.675 [2024-11-20 16:25:25.359889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.675 [2024-11-20 16:25:25.359919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.675 [2024-11-20 16:25:25.359934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.675 [2024-11-20 16:25:25.359948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:49.675 [2024-11-20 16:25:25.359980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.675 qpair failed and we were unable to recover it.
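From this point the TCP connect succeeds but the Fabrics CONNECT itself is rejected, which is the failure mode target_disconnect_tc2 exercises: the host's I/O qpair names a controller ID the target no longer knows, hence "Unknown controller ID 0x1". Decoding the repeated status fields (a side note, not harness output):

    # sct 1 is the command-specific status code type; for a Fabrics CONNECT,
    # sc 130 (0x82) is "Connect Invalid Parameters", consistent with the
    # target-side "Unknown controller ID 0x1" complaint.
    printf 'sct=%d sc=0x%02x\n' 1 130   # -> sct=1 sc=0x82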
00:30:49.675 [2024-11-20 16:25:25.369891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.675 [2024-11-20 16:25:25.369977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.675 [2024-11-20 16:25:25.369998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.675 [2024-11-20 16:25:25.370015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.675 [2024-11-20 16:25:25.370026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:49.675 [2024-11-20 16:25:25.370048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.675 qpair failed and we were unable to recover it.
[... the same six-record CONNECT-failure block, each ending in "qpair failed and we were unable to recover it.", repeats 8 more times at roughly 10 ms intervals, 16:25:25.379916 through 16:25:25.450027 ...]
00:30:49.676 [2024-11-20 16:25:25.460024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.460087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.460104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.460111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.460118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.460134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 00:30:49.676 [2024-11-20 16:25:25.470093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.470194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.470211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.470219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.470225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.470242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 00:30:49.676 [2024-11-20 16:25:25.480026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.480104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.480120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.480128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.480134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.480151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 
00:30:49.676 [2024-11-20 16:25:25.490118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.490192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.490208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.490215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.490222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.490238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 00:30:49.676 [2024-11-20 16:25:25.500137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.500221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.500246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.500254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.500260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.500277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 00:30:49.676 [2024-11-20 16:25:25.510226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.510301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.510318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.510325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.510332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.510348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 
00:30:49.676 [2024-11-20 16:25:25.520113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.520186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.520203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.520210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.520216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.520233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 00:30:49.676 [2024-11-20 16:25:25.530237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.530296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.676 [2024-11-20 16:25:25.530314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.676 [2024-11-20 16:25:25.530322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.676 [2024-11-20 16:25:25.530328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.676 [2024-11-20 16:25:25.530345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.676 qpair failed and we were unable to recover it. 00:30:49.676 [2024-11-20 16:25:25.540272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.676 [2024-11-20 16:25:25.540337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.677 [2024-11-20 16:25:25.540354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.677 [2024-11-20 16:25:25.540361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.677 [2024-11-20 16:25:25.540373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.677 [2024-11-20 16:25:25.540391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.677 qpair failed and we were unable to recover it. 
00:30:49.677 [2024-11-20 16:25:25.550515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.677 [2024-11-20 16:25:25.550589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.677 [2024-11-20 16:25:25.550605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.677 [2024-11-20 16:25:25.550612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.677 [2024-11-20 16:25:25.550619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.677 [2024-11-20 16:25:25.550635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.677 qpair failed and we were unable to recover it. 00:30:49.677 [2024-11-20 16:25:25.560385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.677 [2024-11-20 16:25:25.560453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.677 [2024-11-20 16:25:25.560468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.677 [2024-11-20 16:25:25.560475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.677 [2024-11-20 16:25:25.560482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.677 [2024-11-20 16:25:25.560498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.677 qpair failed and we were unable to recover it. 00:30:49.677 [2024-11-20 16:25:25.570319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.677 [2024-11-20 16:25:25.570383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.677 [2024-11-20 16:25:25.570398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.677 [2024-11-20 16:25:25.570406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.677 [2024-11-20 16:25:25.570412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.677 [2024-11-20 16:25:25.570429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.677 qpair failed and we were unable to recover it. 
00:30:49.677 [2024-11-20 16:25:25.580466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.677 [2024-11-20 16:25:25.580538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.677 [2024-11-20 16:25:25.580553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.677 [2024-11-20 16:25:25.580560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.677 [2024-11-20 16:25:25.580567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.677 [2024-11-20 16:25:25.580584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.677 qpair failed and we were unable to recover it. 00:30:49.677 [2024-11-20 16:25:25.590485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.677 [2024-11-20 16:25:25.590565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.677 [2024-11-20 16:25:25.590582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.677 [2024-11-20 16:25:25.590590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.677 [2024-11-20 16:25:25.590597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.677 [2024-11-20 16:25:25.590614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.677 qpair failed and we were unable to recover it. 00:30:49.677 [2024-11-20 16:25:25.600461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.677 [2024-11-20 16:25:25.600525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.677 [2024-11-20 16:25:25.600542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.677 [2024-11-20 16:25:25.600549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.677 [2024-11-20 16:25:25.600556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.677 [2024-11-20 16:25:25.600572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.677 qpair failed and we were unable to recover it. 
00:30:49.940 [2024-11-20 16:25:25.610387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.610452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.610473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.940 [2024-11-20 16:25:25.610480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.940 [2024-11-20 16:25:25.610487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.940 [2024-11-20 16:25:25.610505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.940 qpair failed and we were unable to recover it. 00:30:49.940 [2024-11-20 16:25:25.620433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.620499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.620517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.940 [2024-11-20 16:25:25.620525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.940 [2024-11-20 16:25:25.620534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.940 [2024-11-20 16:25:25.620551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.940 qpair failed and we were unable to recover it. 00:30:49.940 [2024-11-20 16:25:25.630616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.630686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.630708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.940 [2024-11-20 16:25:25.630715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.940 [2024-11-20 16:25:25.630721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.940 [2024-11-20 16:25:25.630737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.940 qpair failed and we were unable to recover it. 
00:30:49.940 [2024-11-20 16:25:25.640585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.640649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.640666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.940 [2024-11-20 16:25:25.640673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.940 [2024-11-20 16:25:25.640680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.940 [2024-11-20 16:25:25.640696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.940 qpair failed and we were unable to recover it. 00:30:49.940 [2024-11-20 16:25:25.650630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.650697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.650714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.940 [2024-11-20 16:25:25.650722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.940 [2024-11-20 16:25:25.650729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.940 [2024-11-20 16:25:25.650745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.940 qpair failed and we were unable to recover it. 00:30:49.940 [2024-11-20 16:25:25.660640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.660709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.660725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.940 [2024-11-20 16:25:25.660733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.940 [2024-11-20 16:25:25.660739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.940 [2024-11-20 16:25:25.660755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.940 qpair failed and we were unable to recover it. 
00:30:49.940 [2024-11-20 16:25:25.670681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.670760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.670776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.940 [2024-11-20 16:25:25.670784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.940 [2024-11-20 16:25:25.670798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.940 [2024-11-20 16:25:25.670815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.940 qpair failed and we were unable to recover it. 00:30:49.940 [2024-11-20 16:25:25.680688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.940 [2024-11-20 16:25:25.680753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.940 [2024-11-20 16:25:25.680771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.680778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.680784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.680800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.690613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.690715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.690732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.690740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.690746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.690763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 
00:30:49.941 [2024-11-20 16:25:25.700732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.700799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.700816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.700823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.700829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.700846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.710826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.710944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.710961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.710969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.710976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.710993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.720817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.720904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.720941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.720951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.720958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.720983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 
00:30:49.941 [2024-11-20 16:25:25.730882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.730960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.730996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.731006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.731013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.731038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.740751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.740820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.740839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.740847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.740854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.740872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.750945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.751024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.751041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.751048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.751055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.751072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 
00:30:49.941 [2024-11-20 16:25:25.760956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.761024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.761047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.761054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.761061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.761078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.770976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.771046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.771063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.771070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.771077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.771094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.781034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.781116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.781133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.781140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.781147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.781174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 
00:30:49.941 [2024-11-20 16:25:25.791082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.791146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.791170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.791178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.791184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.791201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.801063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.801134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.801150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.801169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.941 [2024-11-20 16:25:25.801175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.941 [2024-11-20 16:25:25.801193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.941 qpair failed and we were unable to recover it. 00:30:49.941 [2024-11-20 16:25:25.811103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.941 [2024-11-20 16:25:25.811172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.941 [2024-11-20 16:25:25.811189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.941 [2024-11-20 16:25:25.811196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.942 [2024-11-20 16:25:25.811203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.942 [2024-11-20 16:25:25.811219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.942 qpair failed and we were unable to recover it. 
00:30:49.942 [2024-11-20 16:25:25.821108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.942 [2024-11-20 16:25:25.821180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.942 [2024-11-20 16:25:25.821198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.942 [2024-11-20 16:25:25.821205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.942 [2024-11-20 16:25:25.821212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.942 [2024-11-20 16:25:25.821229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.942 qpair failed and we were unable to recover it. 00:30:49.942 [2024-11-20 16:25:25.831191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.942 [2024-11-20 16:25:25.831259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.942 [2024-11-20 16:25:25.831275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.942 [2024-11-20 16:25:25.831283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.942 [2024-11-20 16:25:25.831289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.942 [2024-11-20 16:25:25.831306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.942 qpair failed and we were unable to recover it. 00:30:49.942 [2024-11-20 16:25:25.841217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.942 [2024-11-20 16:25:25.841301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.942 [2024-11-20 16:25:25.841318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.942 [2024-11-20 16:25:25.841325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.942 [2024-11-20 16:25:25.841332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.942 [2024-11-20 16:25:25.841357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.942 qpair failed and we were unable to recover it. 
00:30:49.942 [2024-11-20 16:25:25.851216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.942 [2024-11-20 16:25:25.851280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.942 [2024-11-20 16:25:25.851296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.942 [2024-11-20 16:25:25.851303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.942 [2024-11-20 16:25:25.851310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.942 [2024-11-20 16:25:25.851327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.942 qpair failed and we were unable to recover it. 00:30:49.942 [2024-11-20 16:25:25.861111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.942 [2024-11-20 16:25:25.861181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.942 [2024-11-20 16:25:25.861198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.942 [2024-11-20 16:25:25.861205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.942 [2024-11-20 16:25:25.861211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.942 [2024-11-20 16:25:25.861228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.942 qpair failed and we were unable to recover it. 00:30:49.942 [2024-11-20 16:25:25.871298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.942 [2024-11-20 16:25:25.871367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.942 [2024-11-20 16:25:25.871384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.942 [2024-11-20 16:25:25.871391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.942 [2024-11-20 16:25:25.871398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:49.942 [2024-11-20 16:25:25.871415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.942 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-11-20 16:25:25.881299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.205 [2024-11-20 16:25:25.881363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.205 [2024-11-20 16:25:25.881380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.205 [2024-11-20 16:25:25.881387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.205 [2024-11-20 16:25:25.881394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.205 [2024-11-20 16:25:25.881411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-11-20 16:25:25.891336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.205 [2024-11-20 16:25:25.891410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.205 [2024-11-20 16:25:25.891428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.205 [2024-11-20 16:25:25.891435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.205 [2024-11-20 16:25:25.891442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.205 [2024-11-20 16:25:25.891459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-11-20 16:25:25.901350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.205 [2024-11-20 16:25:25.901421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.205 [2024-11-20 16:25:25.901438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.205 [2024-11-20 16:25:25.901445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.205 [2024-11-20 16:25:25.901451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.205 [2024-11-20 16:25:25.901468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.205 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-11-20 16:25:25.911415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.205 [2024-11-20 16:25:25.911527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.205 [2024-11-20 16:25:25.911543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.205 [2024-11-20 16:25:25.911550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.205 [2024-11-20 16:25:25.911557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.205 [2024-11-20 16:25:25.911574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-11-20 16:25:25.921431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.205 [2024-11-20 16:25:25.921505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.205 [2024-11-20 16:25:25.921522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.205 [2024-11-20 16:25:25.921529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.205 [2024-11-20 16:25:25.921536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.205 [2024-11-20 16:25:25.921553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-11-20 16:25:25.931424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.205 [2024-11-20 16:25:25.931490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.205 [2024-11-20 16:25:25.931506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.205 [2024-11-20 16:25:25.931519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.205 [2024-11-20 16:25:25.931526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.205 [2024-11-20 16:25:25.931543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.205 qpair failed and we were unable to recover it. 
[... the seven-message CONNECT failure sequence above repeats, essentially verbatim, for each of the several dozen subsequent qpair connect attempts, one attempt roughly every 10 ms, with only the timestamps advancing from 16:25:25.941415 through 16:25:26.593585; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:50.736 [2024-11-20 16:25:26.603514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.736 [2024-11-20 16:25:26.603584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.736 [2024-11-20 16:25:26.603600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.736 [2024-11-20 16:25:26.603608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.736 [2024-11-20 16:25:26.603614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.736 [2024-11-20 16:25:26.603630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.736 qpair failed and we were unable to recover it. 00:30:50.736 [2024-11-20 16:25:26.613501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.736 [2024-11-20 16:25:26.613569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.736 [2024-11-20 16:25:26.613586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.736 [2024-11-20 16:25:26.613593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.736 [2024-11-20 16:25:26.613600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.736 [2024-11-20 16:25:26.613616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.736 qpair failed and we were unable to recover it. 00:30:50.736 [2024-11-20 16:25:26.623545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.736 [2024-11-20 16:25:26.623612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.736 [2024-11-20 16:25:26.623634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.736 [2024-11-20 16:25:26.623641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.736 [2024-11-20 16:25:26.623648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.736 [2024-11-20 16:25:26.623665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.736 qpair failed and we were unable to recover it. 
00:30:50.736 [2024-11-20 16:25:26.633610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.736 [2024-11-20 16:25:26.633689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.736 [2024-11-20 16:25:26.633705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.736 [2024-11-20 16:25:26.633713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.736 [2024-11-20 16:25:26.633719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.736 [2024-11-20 16:25:26.633735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.736 qpair failed and we were unable to recover it. 00:30:50.736 [2024-11-20 16:25:26.643589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.736 [2024-11-20 16:25:26.643652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.736 [2024-11-20 16:25:26.643667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.736 [2024-11-20 16:25:26.643675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.736 [2024-11-20 16:25:26.643681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.736 [2024-11-20 16:25:26.643697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.736 qpair failed and we were unable to recover it. 00:30:50.736 [2024-11-20 16:25:26.653647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.736 [2024-11-20 16:25:26.653743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.736 [2024-11-20 16:25:26.653759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.736 [2024-11-20 16:25:26.653766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.736 [2024-11-20 16:25:26.653773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.736 [2024-11-20 16:25:26.653789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.736 qpair failed and we were unable to recover it. 
00:30:50.736 [2024-11-20 16:25:26.663678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.736 [2024-11-20 16:25:26.663746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.736 [2024-11-20 16:25:26.663761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.736 [2024-11-20 16:25:26.663769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.736 [2024-11-20 16:25:26.663780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.736 [2024-11-20 16:25:26.663797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.736 qpair failed and we were unable to recover it. 00:30:50.999 [2024-11-20 16:25:26.673735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.999 [2024-11-20 16:25:26.673799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.999 [2024-11-20 16:25:26.673815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.999 [2024-11-20 16:25:26.673823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.999 [2024-11-20 16:25:26.673830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.999 [2024-11-20 16:25:26.673846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.999 qpair failed and we were unable to recover it. 00:30:50.999 [2024-11-20 16:25:26.683617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.999 [2024-11-20 16:25:26.683677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.999 [2024-11-20 16:25:26.683694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.999 [2024-11-20 16:25:26.683701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.999 [2024-11-20 16:25:26.683708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:50.999 [2024-11-20 16:25:26.683724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.999 qpair failed and we were unable to recover it. 
00:30:50.999 [2024-11-20 16:25:26.693714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.999 [2024-11-20 16:25:26.693787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.693803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.693811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.693817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.693834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.703761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.703826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.703842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.703850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.703856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.703873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.713853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.713930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.713946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.713953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.713960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.713976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 
00:30:51.000 [2024-11-20 16:25:26.723724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.723781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.723800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.723807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.723813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.723839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.733878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.733944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.733961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.733969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.733975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.733992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.743919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.743987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.744003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.744011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.744017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.744033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 
00:30:51.000 [2024-11-20 16:25:26.753858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.753954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.753975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.753983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.753990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.754006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.763957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.764020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.764037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.764044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.764050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.764066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.774013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.774083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.774099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.774106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.774113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.774129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 
00:30:51.000 [2024-11-20 16:25:26.783999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.784057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.784073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.784081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.784087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.784103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.794087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.000 [2024-11-20 16:25:26.794151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.000 [2024-11-20 16:25:26.794172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.000 [2024-11-20 16:25:26.794179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.000 [2024-11-20 16:25:26.794191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.000 [2024-11-20 16:25:26.794208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.000 qpair failed and we were unable to recover it. 00:30:51.000 [2024-11-20 16:25:26.804077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.804144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.804165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.804173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.804179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.804196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 
00:30:51.001 [2024-11-20 16:25:26.813989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.814054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.814070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.814078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.814084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.814101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.001 [2024-11-20 16:25:26.824150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.824224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.824242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.824249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.824256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.824273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.001 [2024-11-20 16:25:26.834214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.834301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.834318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.834325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.834331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.834348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 
00:30:51.001 [2024-11-20 16:25:26.844209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.844274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.844290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.844298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.844304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.844321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.001 [2024-11-20 16:25:26.854247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.854310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.854326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.854333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.854340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.854357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.001 [2024-11-20 16:25:26.864261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.864329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.864344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.864352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.864358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.864375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 
00:30:51.001 [2024-11-20 16:25:26.874233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.874302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.874321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.874329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.874336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.874359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.001 [2024-11-20 16:25:26.884362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.884458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.884493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.884501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.884508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.884527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.001 [2024-11-20 16:25:26.894367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.894431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.894449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.894456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.894463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.894480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 
00:30:51.001 [2024-11-20 16:25:26.904390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.904460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.904476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.904483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.904490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.904507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.001 [2024-11-20 16:25:26.914471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.001 [2024-11-20 16:25:26.914548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.001 [2024-11-20 16:25:26.914564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.001 [2024-11-20 16:25:26.914572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.001 [2024-11-20 16:25:26.914578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.001 [2024-11-20 16:25:26.914594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.001 qpair failed and we were unable to recover it. 00:30:51.002 [2024-11-20 16:25:26.924472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.002 [2024-11-20 16:25:26.924541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.002 [2024-11-20 16:25:26.924558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.002 [2024-11-20 16:25:26.924570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.002 [2024-11-20 16:25:26.924577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.002 [2024-11-20 16:25:26.924594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.002 qpair failed and we were unable to recover it. 
00:30:51.265 [2024-11-20 16:25:26.934484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:26.934547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:26.934563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:26.934570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:26.934576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:26.934593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 00:30:51.265 [2024-11-20 16:25:26.944519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:26.944582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:26.944599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:26.944607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:26.944613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:26.944630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 00:30:51.265 [2024-11-20 16:25:26.954573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:26.954648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:26.954664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:26.954671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:26.954678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:26.954694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 
00:30:51.265 [2024-11-20 16:25:26.964590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:26.964657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:26.964673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:26.964681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:26.964688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:26.964710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 00:30:51.265 [2024-11-20 16:25:26.974606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:26.974673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:26.974690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:26.974697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:26.974703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:26.974719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 00:30:51.265 [2024-11-20 16:25:26.984631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:26.984696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:26.984712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:26.984719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:26.984725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:26.984742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 
00:30:51.265 [2024-11-20 16:25:26.994694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:26.994757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:26.994775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:26.994783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:26.994789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:26.994807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 00:30:51.265 [2024-11-20 16:25:27.004703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:27.004774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:27.004791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:27.004798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:27.004804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:27.004821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 00:30:51.265 [2024-11-20 16:25:27.014737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:27.014800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:27.014818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:27.014825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:27.014832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:27.014848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 
00:30:51.265 [2024-11-20 16:25:27.024769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:27.024833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:27.024851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:27.024858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.265 [2024-11-20 16:25:27.024865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.265 [2024-11-20 16:25:27.024882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.265 qpair failed and we were unable to recover it. 00:30:51.265 [2024-11-20 16:25:27.034808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.265 [2024-11-20 16:25:27.034883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.265 [2024-11-20 16:25:27.034899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.265 [2024-11-20 16:25:27.034906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.034912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.034928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.044827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.044901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.044936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.044946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.044954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.044978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 
00:30:51.266 [2024-11-20 16:25:27.054834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.054902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.054922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.054937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.054945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.054964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.064874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.064939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.064956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.064964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.064970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.064988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.074921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.074997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.075014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.075022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.075028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.075045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 
00:30:51.266 [2024-11-20 16:25:27.084915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.084978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.084995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.085003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.085009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.085025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.094963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.095040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.095057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.095064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.095071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.095093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.104998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.105070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.105087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.105094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.105101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.105118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 
00:30:51.266 [2024-11-20 16:25:27.115048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.115125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.115141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.115149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.115155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.115177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.124974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.125046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.125066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.125073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.125080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.125104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.135100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.135173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.135192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.135199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.135206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.135224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 
00:30:51.266 [2024-11-20 16:25:27.145118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.145208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.145226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.145233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.145240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.145258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.155175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.155250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.155266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.266 [2024-11-20 16:25:27.155274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.266 [2024-11-20 16:25:27.155281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.266 [2024-11-20 16:25:27.155298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.266 qpair failed and we were unable to recover it. 00:30:51.266 [2024-11-20 16:25:27.165183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.266 [2024-11-20 16:25:27.165254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.266 [2024-11-20 16:25:27.165271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.267 [2024-11-20 16:25:27.165278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.267 [2024-11-20 16:25:27.165285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.267 [2024-11-20 16:25:27.165301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.267 qpair failed and we were unable to recover it. 
00:30:51.267 [2024-11-20 16:25:27.175257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.267 [2024-11-20 16:25:27.175324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.267 [2024-11-20 16:25:27.175340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.267 [2024-11-20 16:25:27.175348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.267 [2024-11-20 16:25:27.175355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.267 [2024-11-20 16:25:27.175372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.267 qpair failed and we were unable to recover it. 00:30:51.267 [2024-11-20 16:25:27.185222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.267 [2024-11-20 16:25:27.185284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.267 [2024-11-20 16:25:27.185305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.267 [2024-11-20 16:25:27.185313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.267 [2024-11-20 16:25:27.185319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.267 [2024-11-20 16:25:27.185336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.267 qpair failed and we were unable to recover it. 00:30:51.267 [2024-11-20 16:25:27.195324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.267 [2024-11-20 16:25:27.195407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.267 [2024-11-20 16:25:27.195423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.267 [2024-11-20 16:25:27.195430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.267 [2024-11-20 16:25:27.195437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.267 [2024-11-20 16:25:27.195454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.267 qpair failed and we were unable to recover it. 
00:30:51.530 [2024-11-20 16:25:27.205310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.205377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.205392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.205400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.205407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.205423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 00:30:51.530 [2024-11-20 16:25:27.215326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.215392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.215408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.215416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.215423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.215440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 00:30:51.530 [2024-11-20 16:25:27.225371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.225434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.225451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.225459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.225472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.225489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 
00:30:51.530 [2024-11-20 16:25:27.235417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.235478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.235494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.235502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.235508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.235525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 00:30:51.530 [2024-11-20 16:25:27.245418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.245536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.245553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.245562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.245570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.245587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 00:30:51.530 [2024-11-20 16:25:27.255465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.255524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.255540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.255547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.255554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.255571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 
00:30:51.530 [2024-11-20 16:25:27.265491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.265558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.265574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.265582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.265588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.265605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 00:30:51.530 [2024-11-20 16:25:27.275545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.275619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.530 [2024-11-20 16:25:27.275636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.530 [2024-11-20 16:25:27.275643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.530 [2024-11-20 16:25:27.275650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.530 [2024-11-20 16:25:27.275666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.530 qpair failed and we were unable to recover it. 00:30:51.530 [2024-11-20 16:25:27.285546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.530 [2024-11-20 16:25:27.285612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.285628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.285635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.285642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.285658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 
00:30:51.531 [2024-11-20 16:25:27.295432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.295499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.295515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.295522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.295529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.295546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 00:30:51.531 [2024-11-20 16:25:27.305475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.305541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.305561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.305570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.305578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.305599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 00:30:51.531 [2024-11-20 16:25:27.315654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.315721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.315744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.315752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.315758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.315776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 
00:30:51.531 [2024-11-20 16:25:27.325634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.325698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.325715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.325723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.325730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.325747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 00:30:51.531 [2024-11-20 16:25:27.335680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.335745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.335762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.335769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.335776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.335792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 00:30:51.531 [2024-11-20 16:25:27.345711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.345775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.345791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.345798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.345805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.345821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 
00:30:51.531 [2024-11-20 16:25:27.355778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.355851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.355868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.355875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.355887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.355903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 00:30:51.531 [2024-11-20 16:25:27.365734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.365795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.365813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.365820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.365826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.365843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 00:30:51.531 [2024-11-20 16:25:27.375780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.375848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.375864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.375871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.375878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.375894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.531 qpair failed and we were unable to recover it. 
00:30:51.531 [2024-11-20 16:25:27.385811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.531 [2024-11-20 16:25:27.385876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.531 [2024-11-20 16:25:27.385892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.531 [2024-11-20 16:25:27.385900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.531 [2024-11-20 16:25:27.385906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.531 [2024-11-20 16:25:27.385922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 00:30:51.532 [2024-11-20 16:25:27.395774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.532 [2024-11-20 16:25:27.395866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.532 [2024-11-20 16:25:27.395882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.532 [2024-11-20 16:25:27.395889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.532 [2024-11-20 16:25:27.395896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.532 [2024-11-20 16:25:27.395913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 00:30:51.532 [2024-11-20 16:25:27.405747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.532 [2024-11-20 16:25:27.405809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.532 [2024-11-20 16:25:27.405827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.532 [2024-11-20 16:25:27.405834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.532 [2024-11-20 16:25:27.405841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.532 [2024-11-20 16:25:27.405863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 
00:30:51.532 [2024-11-20 16:25:27.415908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.532 [2024-11-20 16:25:27.416001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.532 [2024-11-20 16:25:27.416027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.532 [2024-11-20 16:25:27.416034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.532 [2024-11-20 16:25:27.416041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.532 [2024-11-20 16:25:27.416061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 00:30:51.532 [2024-11-20 16:25:27.425911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.532 [2024-11-20 16:25:27.425980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.532 [2024-11-20 16:25:27.425998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.532 [2024-11-20 16:25:27.426006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.532 [2024-11-20 16:25:27.426013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.532 [2024-11-20 16:25:27.426030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 00:30:51.532 [2024-11-20 16:25:27.436014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.532 [2024-11-20 16:25:27.436087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.532 [2024-11-20 16:25:27.436104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.532 [2024-11-20 16:25:27.436112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.532 [2024-11-20 16:25:27.436118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.532 [2024-11-20 16:25:27.436135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 
00:30:51.532 [2024-11-20 16:25:27.446016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.532 [2024-11-20 16:25:27.446079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.532 [2024-11-20 16:25:27.446097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.532 [2024-11-20 16:25:27.446104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.532 [2024-11-20 16:25:27.446111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.532 [2024-11-20 16:25:27.446128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 00:30:51.532 [2024-11-20 16:25:27.456030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.532 [2024-11-20 16:25:27.456091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.532 [2024-11-20 16:25:27.456107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.532 [2024-11-20 16:25:27.456114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.532 [2024-11-20 16:25:27.456121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.532 [2024-11-20 16:25:27.456138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.532 qpair failed and we were unable to recover it. 00:30:51.795 [2024-11-20 16:25:27.466058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.466122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.466138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.466146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.466153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.466174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 
00:30:51.795 [2024-11-20 16:25:27.476143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.476272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.476288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.476296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.476304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.476322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 00:30:51.795 [2024-11-20 16:25:27.486115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.486187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.486204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.486219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.486226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.486243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 00:30:51.795 [2024-11-20 16:25:27.496220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.496287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.496304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.496312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.496318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.496335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 
00:30:51.795 [2024-11-20 16:25:27.506196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.506261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.506277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.506284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.506290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.506307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 00:30:51.795 [2024-11-20 16:25:27.516226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.516297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.516314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.516322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.516328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.516346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 00:30:51.795 [2024-11-20 16:25:27.526249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.526322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.526339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.526347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.526353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.526377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 
00:30:51.795 [2024-11-20 16:25:27.536183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.795 [2024-11-20 16:25:27.536252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.795 [2024-11-20 16:25:27.536269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.795 [2024-11-20 16:25:27.536277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.795 [2024-11-20 16:25:27.536284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.795 [2024-11-20 16:25:27.536300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.795 qpair failed and we were unable to recover it. 00:30:51.795 [2024-11-20 16:25:27.546319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.546390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.546409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.546417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.546424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.546443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.556340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.556412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.556429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.556437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.556444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.556462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 
00:30:51.796 [2024-11-20 16:25:27.566498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.566622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.566639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.566648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.566656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.566674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.576482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.576547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.576564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.576572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.576579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.576598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.586507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.586574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.586591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.586599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.586605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.586622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 
00:30:51.796 [2024-11-20 16:25:27.596561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.596659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.596675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.596682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.596688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.596705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.606519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.606587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.606603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.606610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.606617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.606633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.616524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.616590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.616605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.616619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.616625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.616642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 
00:30:51.796 [2024-11-20 16:25:27.626449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.626515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.626532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.626539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.626546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.626563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.636630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.636700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.636715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.636723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.636729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.636746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.646652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.646711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.646727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.646735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.646743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.646760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 
00:30:51.796 [2024-11-20 16:25:27.656478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.656538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.656556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.656564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.656571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.656598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.796 qpair failed and we were unable to recover it. 00:30:51.796 [2024-11-20 16:25:27.666621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.796 [2024-11-20 16:25:27.666682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.796 [2024-11-20 16:25:27.666697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.796 [2024-11-20 16:25:27.666704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.796 [2024-11-20 16:25:27.666711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.796 [2024-11-20 16:25:27.666727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.797 qpair failed and we were unable to recover it. 00:30:51.797 [2024-11-20 16:25:27.676582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.797 [2024-11-20 16:25:27.676648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.797 [2024-11-20 16:25:27.676664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.797 [2024-11-20 16:25:27.676671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.797 [2024-11-20 16:25:27.676677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.797 [2024-11-20 16:25:27.676693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.797 qpair failed and we were unable to recover it. 
00:30:51.797 [2024-11-20 16:25:27.686695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.797 [2024-11-20 16:25:27.686754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.797 [2024-11-20 16:25:27.686768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.797 [2024-11-20 16:25:27.686775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.797 [2024-11-20 16:25:27.686782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.797 [2024-11-20 16:25:27.686797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.797 qpair failed and we were unable to recover it. 00:30:51.797 [2024-11-20 16:25:27.696659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.797 [2024-11-20 16:25:27.696717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.797 [2024-11-20 16:25:27.696732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.797 [2024-11-20 16:25:27.696739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.797 [2024-11-20 16:25:27.696745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.797 [2024-11-20 16:25:27.696761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.797 qpair failed and we were unable to recover it. 00:30:51.797 [2024-11-20 16:25:27.706771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.797 [2024-11-20 16:25:27.706862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.797 [2024-11-20 16:25:27.706877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.797 [2024-11-20 16:25:27.706884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.797 [2024-11-20 16:25:27.706891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.797 [2024-11-20 16:25:27.706906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.797 qpair failed and we were unable to recover it. 
00:30:51.797 [2024-11-20 16:25:27.716820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.797 [2024-11-20 16:25:27.716912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.797 [2024-11-20 16:25:27.716926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.797 [2024-11-20 16:25:27.716933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.797 [2024-11-20 16:25:27.716939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.797 [2024-11-20 16:25:27.716954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.797 qpair failed and we were unable to recover it. 00:30:51.797 [2024-11-20 16:25:27.726787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.797 [2024-11-20 16:25:27.726841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.797 [2024-11-20 16:25:27.726856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.797 [2024-11-20 16:25:27.726863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.797 [2024-11-20 16:25:27.726870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:51.797 [2024-11-20 16:25:27.726885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.797 qpair failed and we were unable to recover it. 00:30:52.059 [2024-11-20 16:25:27.736808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.059 [2024-11-20 16:25:27.736868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.059 [2024-11-20 16:25:27.736883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.059 [2024-11-20 16:25:27.736890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.059 [2024-11-20 16:25:27.736897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.059 [2024-11-20 16:25:27.736912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.059 qpair failed and we were unable to recover it. 
00:30:52.059 [2024-11-20 16:25:27.746871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.059 [2024-11-20 16:25:27.746964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.059 [2024-11-20 16:25:27.746982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.059 [2024-11-20 16:25:27.746989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.059 [2024-11-20 16:25:27.746995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.059 [2024-11-20 16:25:27.747010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.059 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.756897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.756954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.756968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.756975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.756982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.756997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.766894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.766984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.767000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.767007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.767014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.767034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 
00:30:52.060 [2024-11-20 16:25:27.776895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.776942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.776956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.776963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.776969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.776984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.786968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.787059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.787072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.787079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.787092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.787107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.797005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.797061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.797075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.797081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.797088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.797102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 
00:30:52.060 [2024-11-20 16:25:27.807014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.807093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.807107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.807113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.807120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.807135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.817017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.817065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.817078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.817085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.817091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.817106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.827064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.827121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.827134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.827141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.827147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.827165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 
00:30:52.060 [2024-11-20 16:25:27.837102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.837162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.837175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.837181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.837188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.837202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.847127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.847180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.847193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.060 [2024-11-20 16:25:27.847200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.060 [2024-11-20 16:25:27.847206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.060 [2024-11-20 16:25:27.847222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.060 qpair failed and we were unable to recover it. 00:30:52.060 [2024-11-20 16:25:27.857124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.060 [2024-11-20 16:25:27.857197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.060 [2024-11-20 16:25:27.857210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.857217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.857223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.857237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 
00:30:52.061 [2024-11-20 16:25:27.867169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.867225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.867238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.867245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.867251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.867265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 00:30:52.061 [2024-11-20 16:25:27.877231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.877293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.877309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.877316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.877323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.877337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 00:30:52.061 [2024-11-20 16:25:27.887285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.887354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.887367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.887374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.887380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.887394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 
00:30:52.061 [2024-11-20 16:25:27.897238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.897310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.897322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.897329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.897336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.897350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 00:30:52.061 [2024-11-20 16:25:27.907317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.907374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.907387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.907394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.907400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.907415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 00:30:52.061 [2024-11-20 16:25:27.917230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.917284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.917297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.917304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.917314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.917328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 
00:30:52.061 [2024-11-20 16:25:27.927352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.927406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.927419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.927426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.927432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.927447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 00:30:52.061 [2024-11-20 16:25:27.937362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.937409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.937422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.937429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.937435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.937449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 00:30:52.061 [2024-11-20 16:25:27.947436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.061 [2024-11-20 16:25:27.947494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.061 [2024-11-20 16:25:27.947507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.061 [2024-11-20 16:25:27.947513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.061 [2024-11-20 16:25:27.947520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.061 [2024-11-20 16:25:27.947534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.061 qpair failed and we were unable to recover it. 
00:30:52.061 [2024-11-20 16:25:27.957365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.062 [2024-11-20 16:25:27.957419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.062 [2024-11-20 16:25:27.957432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.062 [2024-11-20 16:25:27.957439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.062 [2024-11-20 16:25:27.957445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.062 [2024-11-20 16:25:27.957459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.062 qpair failed and we were unable to recover it. 00:30:52.062 [2024-11-20 16:25:27.967467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.062 [2024-11-20 16:25:27.967528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.062 [2024-11-20 16:25:27.967541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.062 [2024-11-20 16:25:27.967548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.062 [2024-11-20 16:25:27.967554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.062 [2024-11-20 16:25:27.967568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.062 qpair failed and we were unable to recover it. 00:30:52.062 [2024-11-20 16:25:27.977465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.062 [2024-11-20 16:25:27.977511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.062 [2024-11-20 16:25:27.977524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.062 [2024-11-20 16:25:27.977531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.062 [2024-11-20 16:25:27.977537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.062 [2024-11-20 16:25:27.977551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.062 qpair failed and we were unable to recover it. 
00:30:52.062 [2024-11-20 16:25:27.987419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.062 [2024-11-20 16:25:27.987477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.062 [2024-11-20 16:25:27.987489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.062 [2024-11-20 16:25:27.987496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.062 [2024-11-20 16:25:27.987503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.062 [2024-11-20 16:25:27.987517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.062 qpair failed and we were unable to recover it. 00:30:52.324 [2024-11-20 16:25:27.997576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:27.997637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:27.997650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:27.997657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:27.997663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:27.997677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 00:30:52.324 [2024-11-20 16:25:28.007585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.007638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.007651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.007658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.007664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.007678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 
00:30:52.324 [2024-11-20 16:25:28.017610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.017694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.017707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.017714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.017720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.017734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 00:30:52.324 [2024-11-20 16:25:28.027648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.027704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.027716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.027723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.027731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.027745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 00:30:52.324 [2024-11-20 16:25:28.037651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.037712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.037726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.037733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.037739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.037753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 
00:30:52.324 [2024-11-20 16:25:28.047681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.047730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.047743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.047753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.047759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.047774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 00:30:52.324 [2024-11-20 16:25:28.057633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.057679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.057692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.057698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.057705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.057718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 00:30:52.324 [2024-11-20 16:25:28.067756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.067813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.067826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.067833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.067839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.067852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 
00:30:52.324 [2024-11-20 16:25:28.077743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.324 [2024-11-20 16:25:28.077798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.324 [2024-11-20 16:25:28.077811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.324 [2024-11-20 16:25:28.077818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.324 [2024-11-20 16:25:28.077824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.324 [2024-11-20 16:25:28.077838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.324 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.087808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.087858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.087871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.087878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.087884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.087901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.097791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.097856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.097870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.097877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.097884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.097899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 
00:30:52.325 [2024-11-20 16:25:28.107744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.107801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.107814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.107821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.107827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.107841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.117910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.118003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.118016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.118023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.118029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.118043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.127921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.127980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.128005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.128014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.128021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.128041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 
00:30:52.325 [2024-11-20 16:25:28.137913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.137977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.137992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.138000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.138006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.138022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.147975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.148074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.148090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.148097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.148104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.148123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.157903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.158006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.158020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.158027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.158034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.158048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 
00:30:52.325 [2024-11-20 16:25:28.168042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.168099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.168112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.168119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.168125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.168139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.178000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.178046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.178063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.178070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.178077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.178091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.187983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.188076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.188090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.188097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.188104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.188119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 
00:30:52.325 [2024-11-20 16:25:28.198145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.198205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.198219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.198226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.198232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.198247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.208108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.325 [2024-11-20 16:25:28.208163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.325 [2024-11-20 16:25:28.208176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.325 [2024-11-20 16:25:28.208184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.325 [2024-11-20 16:25:28.208190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.325 [2024-11-20 16:25:28.208204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.325 qpair failed and we were unable to recover it. 00:30:52.325 [2024-11-20 16:25:28.218038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.326 [2024-11-20 16:25:28.218084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.326 [2024-11-20 16:25:28.218097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.326 [2024-11-20 16:25:28.218104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.326 [2024-11-20 16:25:28.218111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.326 [2024-11-20 16:25:28.218135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.326 qpair failed and we were unable to recover it. 
00:30:52.326 [2024-11-20 16:25:28.228207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.326 [2024-11-20 16:25:28.228265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.326 [2024-11-20 16:25:28.228278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.326 [2024-11-20 16:25:28.228285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.326 [2024-11-20 16:25:28.228292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.326 [2024-11-20 16:25:28.228307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.326 qpair failed and we were unable to recover it. 00:30:52.326 [2024-11-20 16:25:28.238253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.326 [2024-11-20 16:25:28.238305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.326 [2024-11-20 16:25:28.238318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.326 [2024-11-20 16:25:28.238325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.326 [2024-11-20 16:25:28.238332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.326 [2024-11-20 16:25:28.238346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.326 qpair failed and we were unable to recover it. 00:30:52.326 [2024-11-20 16:25:28.248253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.326 [2024-11-20 16:25:28.248303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.326 [2024-11-20 16:25:28.248316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.326 [2024-11-20 16:25:28.248323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.326 [2024-11-20 16:25:28.248329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.326 [2024-11-20 16:25:28.248344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.326 qpair failed and we were unable to recover it. 
00:30:52.588 [2024-11-20 16:25:28.258240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.588 [2024-11-20 16:25:28.258287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.588 [2024-11-20 16:25:28.258300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.588 [2024-11-20 16:25:28.258307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.588 [2024-11-20 16:25:28.258313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.588 [2024-11-20 16:25:28.258328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.588 qpair failed and we were unable to recover it. 00:30:52.588 [2024-11-20 16:25:28.268309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.588 [2024-11-20 16:25:28.268362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.588 [2024-11-20 16:25:28.268374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.588 [2024-11-20 16:25:28.268381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.588 [2024-11-20 16:25:28.268388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.588 [2024-11-20 16:25:28.268402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.588 qpair failed and we were unable to recover it. 00:30:52.588 [2024-11-20 16:25:28.278382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.588 [2024-11-20 16:25:28.278441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.588 [2024-11-20 16:25:28.278454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.588 [2024-11-20 16:25:28.278460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.588 [2024-11-20 16:25:28.278467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.588 [2024-11-20 16:25:28.278481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.588 qpair failed and we were unable to recover it. 
00:30:52.588 [2024-11-20 16:25:28.288363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.588 [2024-11-20 16:25:28.288473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.588 [2024-11-20 16:25:28.288486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.588 [2024-11-20 16:25:28.288493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.588 [2024-11-20 16:25:28.288499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.588 [2024-11-20 16:25:28.288513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.588 qpair failed and we were unable to recover it. 00:30:52.588 [2024-11-20 16:25:28.298282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.588 [2024-11-20 16:25:28.298328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.588 [2024-11-20 16:25:28.298342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.588 [2024-11-20 16:25:28.298349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.588 [2024-11-20 16:25:28.298355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.588 [2024-11-20 16:25:28.298374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.588 qpair failed and we were unable to recover it. 00:30:52.588 [2024-11-20 16:25:28.308461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.588 [2024-11-20 16:25:28.308520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.588 [2024-11-20 16:25:28.308536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.588 [2024-11-20 16:25:28.308543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.588 [2024-11-20 16:25:28.308549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.588 [2024-11-20 16:25:28.308564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.588 qpair failed and we were unable to recover it. 
00:30:52.588 [2024-11-20 16:25:28.318444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.588 [2024-11-20 16:25:28.318496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.318509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.318516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.318523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.318537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.328465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.328520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.328533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.328540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.328546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.328561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.338507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.338556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.338569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.338576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.338582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.338597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 
00:30:52.589 [2024-11-20 16:25:28.348550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.348602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.348615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.348622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.348631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.348646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.358572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.358627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.358640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.358647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.358653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.358667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.368620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.368669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.368682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.368689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.368695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.368709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 
00:30:52.589 [2024-11-20 16:25:28.378576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.378632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.378645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.378652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.378658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.378672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.388645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.388697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.388709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.388716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.388723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.388737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.398690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.398741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.398754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.398761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.398767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.398781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 
00:30:52.589 [2024-11-20 16:25:28.408731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.408833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.408848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.408855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.408861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.408880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.418688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.418734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.418747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.418754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.418761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.418775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.428779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.428836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.428849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.428856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.428862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.428877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 
00:30:52.589 [2024-11-20 16:25:28.438683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.438736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.589 [2024-11-20 16:25:28.438753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.589 [2024-11-20 16:25:28.438759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.589 [2024-11-20 16:25:28.438766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.589 [2024-11-20 16:25:28.438785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.589 qpair failed and we were unable to recover it. 00:30:52.589 [2024-11-20 16:25:28.448687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.589 [2024-11-20 16:25:28.448738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.448751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.448758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.448764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.448778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 00:30:52.590 [2024-11-20 16:25:28.458780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.590 [2024-11-20 16:25:28.458827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.458840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.458847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.458853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.458867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 
00:30:52.590 [2024-11-20 16:25:28.468854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.590 [2024-11-20 16:25:28.468912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.468925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.468932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.468938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.468953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 00:30:52.590 [2024-11-20 16:25:28.478925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.590 [2024-11-20 16:25:28.478981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.478994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.479001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.479011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.479025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 00:30:52.590 [2024-11-20 16:25:28.488934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.590 [2024-11-20 16:25:28.488995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.489008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.489015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.489022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.489036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 
00:30:52.590 [2024-11-20 16:25:28.498883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.590 [2024-11-20 16:25:28.498932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.498945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.498952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.498958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.498973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 00:30:52.590 [2024-11-20 16:25:28.508965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.590 [2024-11-20 16:25:28.509019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.509032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.509039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.509045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.509060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 00:30:52.590 [2024-11-20 16:25:28.519042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.590 [2024-11-20 16:25:28.519093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.590 [2024-11-20 16:25:28.519106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.590 [2024-11-20 16:25:28.519114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.590 [2024-11-20 16:25:28.519120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.590 [2024-11-20 16:25:28.519135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.590 qpair failed and we were unable to recover it. 
00:30:52.851 [2024-11-20 16:25:28.529051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.529123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.529136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.529143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.529150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.529167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.539023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.539084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.539097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.539104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.539110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.539125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.549119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.549173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.549186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.549193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.549199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.549214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 
00:30:52.851 [2024-11-20 16:25:28.559136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.559189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.559202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.559208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.559215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.559229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.569062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.569169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.569182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.569189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.569196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.569210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.579145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.579206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.579219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.579226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.579232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.579247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 
00:30:52.851 [2024-11-20 16:25:28.589228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.589284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.589297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.589304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.589310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.589324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.599215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.599266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.599278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.599285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.599292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.599306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.609261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.609314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.609327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.609337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.609344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.609358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 
00:30:52.851 [2024-11-20 16:25:28.619182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.619235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.619249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.619256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.619264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.619282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.629349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.629432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.629445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.629452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.629458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.629473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.639308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.639366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.639379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.639386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.639392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.639407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 
00:30:52.851 [2024-11-20 16:25:28.649363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.649414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.649427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.649434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.649440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.649458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.659382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.659439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.659452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.659459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.659465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.659480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.669456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.669509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.669522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.669529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.669536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.669550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 
00:30:52.851 [2024-11-20 16:25:28.679440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.679489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.679502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.679509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.679515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.679529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.689483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.689534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.689547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.689554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.689560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.689574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.851 qpair failed and we were unable to recover it. 00:30:52.851 [2024-11-20 16:25:28.699443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.851 [2024-11-20 16:25:28.699495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.851 [2024-11-20 16:25:28.699508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.851 [2024-11-20 16:25:28.699515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.851 [2024-11-20 16:25:28.699521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.851 [2024-11-20 16:25:28.699535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 
00:30:52.852 [2024-11-20 16:25:28.709502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.709554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.709566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.709573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.709579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.709593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 00:30:52.852 [2024-11-20 16:25:28.719529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.719577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.719590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.719597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.719603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.719617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 00:30:52.852 [2024-11-20 16:25:28.729458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.729528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.729541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.729548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.729554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.729568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 
00:30:52.852 [2024-11-20 16:25:28.739560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.739608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.739624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.739631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.739637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.739651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 00:30:52.852 [2024-11-20 16:25:28.749633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.749710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.749723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.749730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.749737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.749754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 00:30:52.852 [2024-11-20 16:25:28.759607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.759656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.759669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.759676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.759683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.759697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 
00:30:52.852 [2024-11-20 16:25:28.769681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.769730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.769743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.769750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.769756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.769770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 00:30:52.852 [2024-11-20 16:25:28.779684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.852 [2024-11-20 16:25:28.779731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.852 [2024-11-20 16:25:28.779744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.852 [2024-11-20 16:25:28.779751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.852 [2024-11-20 16:25:28.779757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:52.852 [2024-11-20 16:25:28.779775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.852 qpair failed and we were unable to recover it. 00:30:53.114 [2024-11-20 16:25:28.789661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-11-20 16:25:28.789714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-11-20 16:25:28.789738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-11-20 16:25:28.789745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-11-20 16:25:28.789752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.114 [2024-11-20 16:25:28.789772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 
00:30:53.114 [2024-11-20 16:25:28.799715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-11-20 16:25:28.799761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-11-20 16:25:28.799775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-11-20 16:25:28.799782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-11-20 16:25:28.799788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.114 [2024-11-20 16:25:28.799802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-11-20 16:25:28.809804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-11-20 16:25:28.809859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-11-20 16:25:28.809872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-11-20 16:25:28.809879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-11-20 16:25:28.809885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.114 [2024-11-20 16:25:28.809900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-11-20 16:25:28.819801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-11-20 16:25:28.819852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-11-20 16:25:28.819876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-11-20 16:25:28.819885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-11-20 16:25:28.819892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.114 [2024-11-20 16:25:28.819912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 
00:30:53.115 [2024-11-20 16:25:28.829847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.829901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.829916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.829923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.829929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.829945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.839854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.839910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.839924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.839931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.839937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.839951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.849883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.849935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.849949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.849956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.849963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.849977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 
00:30:53.115 [2024-11-20 16:25:28.859878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.859929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.859942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.859949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.859955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.859969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.869937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.869987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.870003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.870011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.870017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.870031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.879967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.880056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.880071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.880078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.880085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.880100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 
00:30:53.115 [2024-11-20 16:25:28.889989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.890039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.890052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.890059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.890065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.890080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.900003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.900055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.900068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.900075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.900081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.900095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.910045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.910101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.910114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.910121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.910131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.910145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 
00:30:53.115 [2024-11-20 16:25:28.920050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.920100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.920113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.920119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.920126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.920140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.930131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.930179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.930192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-11-20 16:25:28.930199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-11-20 16:25:28.930206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.115 [2024-11-20 16:25:28.930220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-11-20 16:25:28.940098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-11-20 16:25:28.940145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-11-20 16:25:28.940162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:28.940169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:28.940176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:28.940190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 
00:30:53.116 [2024-11-20 16:25:28.950186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:28.950239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:28.950252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:28.950259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:28.950265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:28.950279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 00:30:53.116 [2024-11-20 16:25:28.960168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:28.960216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:28.960229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:28.960236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:28.960242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:28.960256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 00:30:53.116 [2024-11-20 16:25:28.970207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:28.970256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:28.970269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:28.970276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:28.970282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:28.970296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 
00:30:53.116 [2024-11-20 16:25:28.980230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:28.980275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:28.980288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:28.980295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:28.980301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:28.980315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 00:30:53.116 [2024-11-20 16:25:28.990175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:28.990234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:28.990248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:28.990255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:28.990261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:28.990282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 00:30:53.116 [2024-11-20 16:25:29.000281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:29.000381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:29.000398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:29.000405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:29.000411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:29.000426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 
00:30:53.116 [2024-11-20 16:25:29.010339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:29.010390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:29.010403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:29.010410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:29.010416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:29.010430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 00:30:53.116 [2024-11-20 16:25:29.020343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:29.020393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:29.020406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:29.020413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:29.020419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:29.020434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 00:30:53.116 [2024-11-20 16:25:29.030412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:29.030465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:29.030478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:29.030485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:29.030491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:29.030505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 
00:30:53.116 [2024-11-20 16:25:29.040393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.116 [2024-11-20 16:25:29.040442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.116 [2024-11-20 16:25:29.040455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.116 [2024-11-20 16:25:29.040465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.116 [2024-11-20 16:25:29.040472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.116 [2024-11-20 16:25:29.040486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.116 qpair failed and we were unable to recover it. 00:30:53.379 [2024-11-20 16:25:29.050462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.379 [2024-11-20 16:25:29.050518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.379 [2024-11-20 16:25:29.050531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.379 [2024-11-20 16:25:29.050538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.379 [2024-11-20 16:25:29.050544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.379 [2024-11-20 16:25:29.050558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.379 qpair failed and we were unable to recover it. 00:30:53.379 [2024-11-20 16:25:29.060440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.379 [2024-11-20 16:25:29.060485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.379 [2024-11-20 16:25:29.060498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.379 [2024-11-20 16:25:29.060505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.379 [2024-11-20 16:25:29.060511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.379 [2024-11-20 16:25:29.060525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.379 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-11-20 16:25:29.070512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.070568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.070581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.070588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.070594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.070608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.080519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.080616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.080628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.080635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.080641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.080656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.090546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.090622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.090635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.090642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.090648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.090664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-11-20 16:25:29.100561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.100632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.100646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.100654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.100662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.100679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.110608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.110665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.110678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.110685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.110692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.110706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.120495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.120552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.120566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.120573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.120580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.120595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-11-20 16:25:29.130671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.130726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.130740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.130747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.130753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.130767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.140626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.140669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.140682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.140689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.140695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.140709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.150733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.150788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.150800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.150807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.150813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.150828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-11-20 16:25:29.160711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.160763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.160776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.160783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.160790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.160804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.170777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.170830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.170842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.170856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.170862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.380 [2024-11-20 16:25:29.170877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-11-20 16:25:29.180760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-11-20 16:25:29.180806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-11-20 16:25:29.180819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-11-20 16:25:29.180827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-11-20 16:25:29.180833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.180848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-11-20 16:25:29.190711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.190778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.190791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.190799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.190805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.190819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-11-20 16:25:29.200842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.200893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.200906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.200913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.200919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.200933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-11-20 16:25:29.210857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.210922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.210935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.210942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.210948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.210966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-11-20 16:25:29.220869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.220917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.220930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.220937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.220943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.220958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-11-20 16:25:29.230953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.231016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.231040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.231048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.231055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.231075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-11-20 16:25:29.240945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.240995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.241010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.241018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.241025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.241041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-11-20 16:25:29.250986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.251039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.251052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.251059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.251065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.251080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-11-20 16:25:29.260939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.260994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.261008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.261015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.261021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.261036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-11-20 16:25:29.270918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.270976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.270991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.271000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.271009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.271034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-11-20 16:25:29.281047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.281100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.281113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.281120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-11-20 16:25:29.281127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.381 [2024-11-20 16:25:29.281141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-11-20 16:25:29.291087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-11-20 16:25:29.291134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-11-20 16:25:29.291147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-11-20 16:25:29.291154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.382 [2024-11-20 16:25:29.291165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.382 [2024-11-20 16:25:29.291179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.382 qpair failed and we were unable to recover it. 00:30:53.382 [2024-11-20 16:25:29.301028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.382 [2024-11-20 16:25:29.301076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.382 [2024-11-20 16:25:29.301093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.382 [2024-11-20 16:25:29.301100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.382 [2024-11-20 16:25:29.301107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.382 [2024-11-20 16:25:29.301121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.382 qpair failed and we were unable to recover it. 
00:30:53.382 [2024-11-20 16:25:29.311154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.382 [2024-11-20 16:25:29.311212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.382 [2024-11-20 16:25:29.311225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.382 [2024-11-20 16:25:29.311232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.382 [2024-11-20 16:25:29.311238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.382 [2024-11-20 16:25:29.311253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.382 qpair failed and we were unable to recover it. 00:30:53.644 [2024-11-20 16:25:29.321133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.644 [2024-11-20 16:25:29.321227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.644 [2024-11-20 16:25:29.321241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.644 [2024-11-20 16:25:29.321248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.644 [2024-11-20 16:25:29.321254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.644 [2024-11-20 16:25:29.321269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-11-20 16:25:29.331177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.644 [2024-11-20 16:25:29.331227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.644 [2024-11-20 16:25:29.331241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.644 [2024-11-20 16:25:29.331248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.644 [2024-11-20 16:25:29.331254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.644 [2024-11-20 16:25:29.331269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.644 qpair failed and we were unable to recover it. 
00:30:53.644 [2024-11-20 16:25:29.341191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.644 [2024-11-20 16:25:29.341289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.644 [2024-11-20 16:25:29.341302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.644 [2024-11-20 16:25:29.341309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.644 [2024-11-20 16:25:29.341319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.644 [2024-11-20 16:25:29.341333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-11-20 16:25:29.351257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.644 [2024-11-20 16:25:29.351313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.644 [2024-11-20 16:25:29.351326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.644 [2024-11-20 16:25:29.351333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.644 [2024-11-20 16:25:29.351339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.644 [2024-11-20 16:25:29.351354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-11-20 16:25:29.361229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.644 [2024-11-20 16:25:29.361280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.644 [2024-11-20 16:25:29.361293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.644 [2024-11-20 16:25:29.361300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.644 [2024-11-20 16:25:29.361306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.644 [2024-11-20 16:25:29.361320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.644 qpair failed and we were unable to recover it. 
00:30:53.644 [2024-11-20 16:25:29.371305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.644 [2024-11-20 16:25:29.371352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.644 [2024-11-20 16:25:29.371365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.644 [2024-11-20 16:25:29.371373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.644 [2024-11-20 16:25:29.371379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.644 [2024-11-20 16:25:29.371394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.381299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.381344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.381357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.381364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.381370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.381384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.391356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.391406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.391419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.391426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.391432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.391446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 
00:30:53.645 [2024-11-20 16:25:29.401256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.401309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.401322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.401329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.401335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.401349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.411389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.411466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.411479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.411486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.411492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.411506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.421404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.421455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.421467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.421474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.421480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.421495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 
00:30:53.645 [2024-11-20 16:25:29.431474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.431530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.431547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.431554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.431560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.431574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.441510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.441560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.441573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.441580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.441586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.441600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.451541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.451595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.451608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.451615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.451621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.451635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 
00:30:53.645 [2024-11-20 16:25:29.461542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.461596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.461609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.461616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.461622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.461636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.471623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.471675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.471688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.471695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.471704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.471719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.481587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.481705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.481718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.481725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.481732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.481746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 
00:30:53.645 [2024-11-20 16:25:29.491635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.491684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.491697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.491703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.491710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.491724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.501606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-11-20 16:25:29.501654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-11-20 16:25:29.501667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-11-20 16:25:29.501673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-11-20 16:25:29.501680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.645 [2024-11-20 16:25:29.501694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-11-20 16:25:29.511688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-11-20 16:25:29.511744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-11-20 16:25:29.511756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-11-20 16:25:29.511763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-11-20 16:25:29.511769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.646 [2024-11-20 16:25:29.511784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-11-20 16:25:29.521564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-11-20 16:25:29.521613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-11-20 16:25:29.521626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-11-20 16:25:29.521634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-11-20 16:25:29.521640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.646 [2024-11-20 16:25:29.521659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-11-20 16:25:29.531746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-11-20 16:25:29.531798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-11-20 16:25:29.531812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-11-20 16:25:29.531819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-11-20 16:25:29.531825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.646 [2024-11-20 16:25:29.531839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-11-20 16:25:29.541724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-11-20 16:25:29.541768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-11-20 16:25:29.541781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-11-20 16:25:29.541788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-11-20 16:25:29.541795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.646 [2024-11-20 16:25:29.541809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-11-20 16:25:29.551678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-11-20 16:25:29.551736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-11-20 16:25:29.551749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-11-20 16:25:29.551756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-11-20 16:25:29.551762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.646 [2024-11-20 16:25:29.551777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-11-20 16:25:29.561770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-11-20 16:25:29.561821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-11-20 16:25:29.561838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-11-20 16:25:29.561845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-11-20 16:25:29.561851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.646 [2024-11-20 16:25:29.561866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-11-20 16:25:29.571864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-11-20 16:25:29.571912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-11-20 16:25:29.571925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-11-20 16:25:29.571932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-11-20 16:25:29.571938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.646 [2024-11-20 16:25:29.571952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.908 [2024-11-20 16:25:29.581846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.581899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.581924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.908 [2024-11-20 16:25:29.581933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.908 [2024-11-20 16:25:29.581940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.908 [2024-11-20 16:25:29.581960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.908 qpair failed and we were unable to recover it. 00:30:53.908 [2024-11-20 16:25:29.591908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.591989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.592005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.908 [2024-11-20 16:25:29.592013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.908 [2024-11-20 16:25:29.592019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.908 [2024-11-20 16:25:29.592035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.908 qpair failed and we were unable to recover it. 00:30:53.908 [2024-11-20 16:25:29.601897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.601958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.601972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.908 [2024-11-20 16:25:29.601983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.908 [2024-11-20 16:25:29.601991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.908 [2024-11-20 16:25:29.602005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.908 qpair failed and we were unable to recover it. 
00:30:53.908 [2024-11-20 16:25:29.611959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.612011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.612024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.908 [2024-11-20 16:25:29.612031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.908 [2024-11-20 16:25:29.612037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.908 [2024-11-20 16:25:29.612052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.908 qpair failed and we were unable to recover it. 00:30:53.908 [2024-11-20 16:25:29.621927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.621974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.621987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.908 [2024-11-20 16:25:29.621994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.908 [2024-11-20 16:25:29.622001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.908 [2024-11-20 16:25:29.622015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.908 qpair failed and we were unable to recover it. 00:30:53.908 [2024-11-20 16:25:29.632026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.632080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.632094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.908 [2024-11-20 16:25:29.632101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.908 [2024-11-20 16:25:29.632107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.908 [2024-11-20 16:25:29.632121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.908 qpair failed and we were unable to recover it. 
00:30:53.908 [2024-11-20 16:25:29.642021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.642112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.642125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.908 [2024-11-20 16:25:29.642132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.908 [2024-11-20 16:25:29.642139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.908 [2024-11-20 16:25:29.642153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.908 qpair failed and we were unable to recover it. 00:30:53.908 [2024-11-20 16:25:29.652041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.908 [2024-11-20 16:25:29.652093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.908 [2024-11-20 16:25:29.652106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.652113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.652119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.652133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.662030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.662078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.662091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.662098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.662104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.662119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 
00:30:53.909 [2024-11-20 16:25:29.672138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.672198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.672211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.672218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.672225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.672239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.682137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.682190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.682203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.682210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.682216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.682231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.692191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.692273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.692286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.692293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.692300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.692315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 
00:30:53.909 [2024-11-20 16:25:29.702186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.702283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.702297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.702304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.702310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.702324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.712253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.712308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.712321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.712328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.712334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.712348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.722245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.722296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.722309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.722317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.722323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.722337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 
00:30:53.909 [2024-11-20 16:25:29.732304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.732363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.732375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.732387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.732394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.732408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.742269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.742315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.742328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.742335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.742341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.742355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.752353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.752406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.752419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.752426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.752432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.752447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 
00:30:53.909 [2024-11-20 16:25:29.762332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.762386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.762398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.762405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.762411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.762426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.772452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.772503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.772516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.909 [2024-11-20 16:25:29.772523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.909 [2024-11-20 16:25:29.772529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.909 [2024-11-20 16:25:29.772548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.909 qpair failed and we were unable to recover it. 00:30:53.909 [2024-11-20 16:25:29.782390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.909 [2024-11-20 16:25:29.782444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.909 [2024-11-20 16:25:29.782458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.910 [2024-11-20 16:25:29.782465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.910 [2024-11-20 16:25:29.782471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.910 [2024-11-20 16:25:29.782485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.910 qpair failed and we were unable to recover it. 
00:30:53.910 [2024-11-20 16:25:29.792453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.910 [2024-11-20 16:25:29.792508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.910 [2024-11-20 16:25:29.792520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.910 [2024-11-20 16:25:29.792527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.910 [2024-11-20 16:25:29.792534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.910 [2024-11-20 16:25:29.792548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.910 qpair failed and we were unable to recover it. 00:30:53.910 [2024-11-20 16:25:29.802469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.910 [2024-11-20 16:25:29.802520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.910 [2024-11-20 16:25:29.802533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.910 [2024-11-20 16:25:29.802540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.910 [2024-11-20 16:25:29.802546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.910 [2024-11-20 16:25:29.802560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.910 qpair failed and we were unable to recover it. 00:30:53.910 [2024-11-20 16:25:29.812517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.910 [2024-11-20 16:25:29.812574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.910 [2024-11-20 16:25:29.812587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.910 [2024-11-20 16:25:29.812594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.910 [2024-11-20 16:25:29.812601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.910 [2024-11-20 16:25:29.812615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.910 qpair failed and we were unable to recover it. 
00:30:53.910 [2024-11-20 16:25:29.822394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.910 [2024-11-20 16:25:29.822451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.910 [2024-11-20 16:25:29.822465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.910 [2024-11-20 16:25:29.822472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.910 [2024-11-20 16:25:29.822479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.910 [2024-11-20 16:25:29.822500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.910 qpair failed and we were unable to recover it. 00:30:53.910 [2024-11-20 16:25:29.832566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.910 [2024-11-20 16:25:29.832648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.910 [2024-11-20 16:25:29.832662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.910 [2024-11-20 16:25:29.832669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.910 [2024-11-20 16:25:29.832676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:53.910 [2024-11-20 16:25:29.832694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.910 qpair failed and we were unable to recover it. 00:30:54.172 [2024-11-20 16:25:29.842568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.842617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.842631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.842639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.842645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.842660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 
00:30:54.172 [2024-11-20 16:25:29.852602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.852654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.852667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.852674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.852681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.852696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 00:30:54.172 [2024-11-20 16:25:29.862608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.862660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.862676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.862683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.862689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.862704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 00:30:54.172 [2024-11-20 16:25:29.872642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.872695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.872708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.872715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.872721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.872735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 
00:30:54.172 [2024-11-20 16:25:29.882653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.882703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.882716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.882723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.882729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.882743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 00:30:54.172 [2024-11-20 16:25:29.892646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.892698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.892711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.892718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.892724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.892739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 00:30:54.172 [2024-11-20 16:25:29.902707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.902753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.902766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.902773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.902787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.902801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 
00:30:54.172 [2024-11-20 16:25:29.912774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.912828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.912841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.912847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.912854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.912868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 00:30:54.172 [2024-11-20 16:25:29.922759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.922826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.922839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.922846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.922853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.922867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 00:30:54.172 [2024-11-20 16:25:29.932826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.172 [2024-11-20 16:25:29.932911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.172 [2024-11-20 16:25:29.932924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.172 [2024-11-20 16:25:29.932931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.172 [2024-11-20 16:25:29.932938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.172 [2024-11-20 16:25:29.932952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.172 qpair failed and we were unable to recover it. 
00:30:54.173 [2024-11-20 16:25:29.942827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.173 [2024-11-20 16:25:29.942878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.173 [2024-11-20 16:25:29.942891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.173 [2024-11-20 16:25:29.942898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.173 [2024-11-20 16:25:29.942904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.173 [2024-11-20 16:25:29.942918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.173 qpair failed and we were unable to recover it. 00:30:54.173 [2024-11-20 16:25:29.952886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.173 [2024-11-20 16:25:29.952941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.173 [2024-11-20 16:25:29.952954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.173 [2024-11-20 16:25:29.952961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.173 [2024-11-20 16:25:29.952968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.173 [2024-11-20 16:25:29.952982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.173 qpair failed and we were unable to recover it. 00:30:54.173 [2024-11-20 16:25:29.962853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.173 [2024-11-20 16:25:29.962909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.173 [2024-11-20 16:25:29.962922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.173 [2024-11-20 16:25:29.962929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.173 [2024-11-20 16:25:29.962936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90 00:30:54.173 [2024-11-20 16:25:29.962949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.173 qpair failed and we were unable to recover it. 
00:30:54.173 [2024-11-20 16:25:29.972943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:29.973045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:29.973058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:29.973065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:29.973072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:29.973086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:29.982933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:29.983029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:29.983042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:29.983049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:29.983055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:29.983070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:29.992996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:29.993047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:29.993063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:29.993070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:29.993076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:29.993091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:30.003411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:30.003467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:30.003481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:30.003489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:30.003495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:30.003510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:30.013471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:30.013523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:30.013536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:30.013544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:30.013550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:30.013564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:30.023461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:30.023514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:30.023528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:30.023535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:30.023541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:30.023555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:30.033484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:30.033537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:30.033550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:30.033558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:30.033568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:30.033582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:30.043409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.173 [2024-11-20 16:25:30.043460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.173 [2024-11-20 16:25:30.043473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.173 [2024-11-20 16:25:30.043481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.173 [2024-11-20 16:25:30.043487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.173 [2024-11-20 16:25:30.043501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.173 qpair failed and we were unable to recover it.
00:30:54.173 [2024-11-20 16:25:30.053569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.174 [2024-11-20 16:25:30.053625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.174 [2024-11-20 16:25:30.053637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.174 [2024-11-20 16:25:30.053644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.174 [2024-11-20 16:25:30.053651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.174 [2024-11-20 16:25:30.053665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.174 qpair failed and we were unable to recover it.
00:30:54.174 [2024-11-20 16:25:30.063544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.174 [2024-11-20 16:25:30.063595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.174 [2024-11-20 16:25:30.063608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.174 [2024-11-20 16:25:30.063615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.174 [2024-11-20 16:25:30.063621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.174 [2024-11-20 16:25:30.063636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.174 qpair failed and we were unable to recover it.
00:30:54.174 [2024-11-20 16:25:30.073574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.174 [2024-11-20 16:25:30.073629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.174 [2024-11-20 16:25:30.073641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.174 [2024-11-20 16:25:30.073648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.174 [2024-11-20 16:25:30.073655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.174 [2024-11-20 16:25:30.073669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.174 qpair failed and we were unable to recover it.
00:30:54.174 [2024-11-20 16:25:30.083627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.174 [2024-11-20 16:25:30.083674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.174 [2024-11-20 16:25:30.083687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.174 [2024-11-20 16:25:30.083694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.174 [2024-11-20 16:25:30.083701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.174 [2024-11-20 16:25:30.083715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.174 qpair failed and we were unable to recover it.
00:30:54.174 [2024-11-20 16:25:30.093661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.174 [2024-11-20 16:25:30.093716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.174 [2024-11-20 16:25:30.093730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.174 [2024-11-20 16:25:30.093737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.174 [2024-11-20 16:25:30.093743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.174 [2024-11-20 16:25:30.093758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.174 qpair failed and we were unable to recover it.
00:30:54.174 [2024-11-20 16:25:30.103664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.174 [2024-11-20 16:25:30.103714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.174 [2024-11-20 16:25:30.103729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.174 [2024-11-20 16:25:30.103740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.174 [2024-11-20 16:25:30.103747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.174 [2024-11-20 16:25:30.103762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.174 qpair failed and we were unable to recover it.
00:30:54.436 [2024-11-20 16:25:30.113706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.436 [2024-11-20 16:25:30.113761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.436 [2024-11-20 16:25:30.113774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.436 [2024-11-20 16:25:30.113781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.436 [2024-11-20 16:25:30.113788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.436 [2024-11-20 16:25:30.113802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.436 qpair failed and we were unable to recover it.
00:30:54.436 [2024-11-20 16:25:30.123583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.436 [2024-11-20 16:25:30.123632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.436 [2024-11-20 16:25:30.123648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.436 [2024-11-20 16:25:30.123656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.436 [2024-11-20 16:25:30.123663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.436 [2024-11-20 16:25:30.123677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.436 qpair failed and we were unable to recover it.
00:30:54.436 [2024-11-20 16:25:30.133770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.436 [2024-11-20 16:25:30.133818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.436 [2024-11-20 16:25:30.133831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.436 [2024-11-20 16:25:30.133838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.436 [2024-11-20 16:25:30.133845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.436 [2024-11-20 16:25:30.133860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.436 qpair failed and we were unable to recover it.
00:30:54.436 [2024-11-20 16:25:30.143776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.436 [2024-11-20 16:25:30.143823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.436 [2024-11-20 16:25:30.143836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.436 [2024-11-20 16:25:30.143844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.436 [2024-11-20 16:25:30.143850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.436 [2024-11-20 16:25:30.143865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.436 qpair failed and we were unable to recover it.
00:30:54.436 [2024-11-20 16:25:30.153817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.436 [2024-11-20 16:25:30.153881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.436 [2024-11-20 16:25:30.153905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.436 [2024-11-20 16:25:30.153914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.436 [2024-11-20 16:25:30.153921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.436 [2024-11-20 16:25:30.153941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.436 qpair failed and we were unable to recover it.
00:30:54.436 [2024-11-20 16:25:30.163827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.436 [2024-11-20 16:25:30.163886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.436 [2024-11-20 16:25:30.163900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.436 [2024-11-20 16:25:30.163912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.436 [2024-11-20 16:25:30.163919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.436 [2024-11-20 16:25:30.163935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.436 qpair failed and we were unable to recover it.
00:30:54.436 [2024-11-20 16:25:30.173979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.174042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.174066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.174075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.174082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.174102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.183780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.183841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.183856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.183863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.183870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.183886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.193988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.194042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.194055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.194062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.194068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.194083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.203946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.203996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.204008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.204015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.204022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.204036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.213983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.214034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.214047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.214054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.214061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.214075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.223852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.223904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.223917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.223924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.223930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.223944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.234024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.234088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.234101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.234108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.234115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.234129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.243917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.243967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.243982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.243989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.243995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.244015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.254098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.254152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.254170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.254177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.254183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.254198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.264100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.264205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.264218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.264225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.264232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.264246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.274178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.274255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.274268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.274275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.274281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.274296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.284054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.284106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.284120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.284127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.284133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.284148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.294219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.437 [2024-11-20 16:25:30.294270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.437 [2024-11-20 16:25:30.294284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.437 [2024-11-20 16:25:30.294294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.437 [2024-11-20 16:25:30.294301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.437 [2024-11-20 16:25:30.294315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.437 qpair failed and we were unable to recover it.
00:30:54.437 [2024-11-20 16:25:30.304211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.438 [2024-11-20 16:25:30.304261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.438 [2024-11-20 16:25:30.304274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.438 [2024-11-20 16:25:30.304281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.438 [2024-11-20 16:25:30.304287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.438 [2024-11-20 16:25:30.304301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.438 qpair failed and we were unable to recover it.
00:30:54.438 [2024-11-20 16:25:30.314265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.438 [2024-11-20 16:25:30.314322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.438 [2024-11-20 16:25:30.314335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.438 [2024-11-20 16:25:30.314342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.438 [2024-11-20 16:25:30.314348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.438 [2024-11-20 16:25:30.314362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.438 qpair failed and we were unable to recover it.
00:30:54.438 [2024-11-20 16:25:30.324248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.438 [2024-11-20 16:25:30.324324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.438 [2024-11-20 16:25:30.324337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.438 [2024-11-20 16:25:30.324344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.438 [2024-11-20 16:25:30.324350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.438 [2024-11-20 16:25:30.324364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.438 qpair failed and we were unable to recover it.
00:30:54.438 [2024-11-20 16:25:30.334292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.438 [2024-11-20 16:25:30.334344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.438 [2024-11-20 16:25:30.334357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.438 [2024-11-20 16:25:30.334364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.438 [2024-11-20 16:25:30.334370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.438 [2024-11-20 16:25:30.334388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.438 qpair failed and we were unable to recover it.
00:30:54.438 [2024-11-20 16:25:30.344192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.438 [2024-11-20 16:25:30.344239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.438 [2024-11-20 16:25:30.344253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.438 [2024-11-20 16:25:30.344259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.438 [2024-11-20 16:25:30.344266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.438 [2024-11-20 16:25:30.344286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.438 qpair failed and we were unable to recover it.
00:30:54.438 [2024-11-20 16:25:30.354396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.438 [2024-11-20 16:25:30.354451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.438 [2024-11-20 16:25:30.354464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.438 [2024-11-20 16:25:30.354471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.438 [2024-11-20 16:25:30.354477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.438 [2024-11-20 16:25:30.354491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.438 qpair failed and we were unable to recover it.
00:30:54.438 [2024-11-20 16:25:30.364395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.438 [2024-11-20 16:25:30.364454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.438 [2024-11-20 16:25:30.364467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.438 [2024-11-20 16:25:30.364474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.438 [2024-11-20 16:25:30.364480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.438 [2024-11-20 16:25:30.364494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.438 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.374460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.374511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.374523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.374530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.374537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.374551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.384422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.384469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.384482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.384489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.384495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.384510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.394487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.394543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.394555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.394562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.394569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.394583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.404485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.404538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.404550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.404557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.404564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.404578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.414553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.414605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.414618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.414625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.414631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.414645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.424550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.424651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.424668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.424675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.424683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.424702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.434622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.434699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.434713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.434719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.434726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.434740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.444632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.444697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.444710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.444717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.444723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.444737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.454662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.454712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.454725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.454732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.454738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.454753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.464674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.701 [2024-11-20 16:25:30.464724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.701 [2024-11-20 16:25:30.464737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.701 [2024-11-20 16:25:30.464744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.701 [2024-11-20 16:25:30.464753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.701 [2024-11-20 16:25:30.464767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.701 qpair failed and we were unable to recover it.
00:30:54.701 [2024-11-20 16:25:30.474716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.702 [2024-11-20 16:25:30.474771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.702 [2024-11-20 16:25:30.474783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.702 [2024-11-20 16:25:30.474790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.702 [2024-11-20 16:25:30.474796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.702 [2024-11-20 16:25:30.474811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.702 qpair failed and we were unable to recover it.
00:30:54.702 [2024-11-20 16:25:30.484611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.702 [2024-11-20 16:25:30.484660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.702 [2024-11-20 16:25:30.484673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.702 [2024-11-20 16:25:30.484680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.702 [2024-11-20 16:25:30.484687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.702 [2024-11-20 16:25:30.484706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.702 qpair failed and we were unable to recover it.
00:30:54.702 [2024-11-20 16:25:30.494764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.702 [2024-11-20 16:25:30.494815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.702 [2024-11-20 16:25:30.494828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.702 [2024-11-20 16:25:30.494835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.702 [2024-11-20 16:25:30.494841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.702 [2024-11-20 16:25:30.494855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.702 qpair failed and we were unable to recover it.
00:30:54.702 [2024-11-20 16:25:30.504773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.702 [2024-11-20 16:25:30.504818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.702 [2024-11-20 16:25:30.504831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.702 [2024-11-20 16:25:30.504838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.702 [2024-11-20 16:25:30.504844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.702 [2024-11-20 16:25:30.504858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.702 qpair failed and we were unable to recover it.
00:30:54.702 [2024-11-20 16:25:30.514853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.702 [2024-11-20 16:25:30.514904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.702 [2024-11-20 16:25:30.514917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.702 [2024-11-20 16:25:30.514924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.702 [2024-11-20 16:25:30.514930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.702 [2024-11-20 16:25:30.514944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.702 qpair failed and we were unable to recover it.
00:30:54.702 [2024-11-20 16:25:30.524826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.702 [2024-11-20 16:25:30.524874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.702 [2024-11-20 16:25:30.524888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.702 [2024-11-20 16:25:30.524895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.702 [2024-11-20 16:25:30.524901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.702 [2024-11-20 16:25:30.524915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.702 qpair failed and we were unable to recover it.
00:30:54.702 [2024-11-20 16:25:30.534896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.702 [2024-11-20 16:25:30.534948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.702 [2024-11-20 16:25:30.534960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.702 [2024-11-20 16:25:30.534967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.702 [2024-11-20 16:25:30.534973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1848000b90
00:30:54.702 [2024-11-20 16:25:30.534987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.702 qpair failed and we were unable to recover it.
00:30:54.702 [2024-11-20 16:25:30.544896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.702 [2024-11-20 16:25:30.544994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.702 [2024-11-20 16:25:30.545057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.702 [2024-11-20 16:25:30.545082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.702 [2024-11-20 16:25:30.545103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1840000b90 00:30:54.702 [2024-11-20 16:25:30.545169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:54.702 qpair failed and we were unable to recover it. 00:30:54.702 [2024-11-20 16:25:30.554922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.702 [2024-11-20 16:25:30.555033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.702 [2024-11-20 16:25:30.555088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.702 [2024-11-20 16:25:30.555107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.702 [2024-11-20 16:25:30.555122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1840000b90 00:30:54.702 [2024-11-20 16:25:30.555173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:54.702 qpair failed and we were unable to recover it. 00:30:54.702 [2024-11-20 16:25:30.564994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.702 [2024-11-20 16:25:30.565090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.702 [2024-11-20 16:25:30.565153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.702 [2024-11-20 16:25:30.565193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.702 [2024-11-20 16:25:30.565214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f183c000b90 00:30:54.702 [2024-11-20 16:25:30.565273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:54.703 qpair failed and we were unable to recover it. 
00:30:54.703 [2024-11-20 16:25:30.574983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.703 [2024-11-20 16:25:30.575052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.703 [2024-11-20 16:25:30.575082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.703 [2024-11-20 16:25:30.575097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.703 [2024-11-20 16:25:30.575112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f183c000b90 00:30:54.703 [2024-11-20 16:25:30.575143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:54.703 qpair failed and we were unable to recover it. 00:30:54.703 [2024-11-20 16:25:30.575604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1523e00 is same with the state(6) to be set 00:30:54.703 [2024-11-20 16:25:30.584973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.703 [2024-11-20 16:25:30.585079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.703 [2024-11-20 16:25:30.585143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.703 [2024-11-20 16:25:30.585179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.703 [2024-11-20 16:25:30.585201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x152e0c0 00:30:54.703 [2024-11-20 16:25:30.585254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:54.703 qpair failed and we were unable to recover it. 00:30:54.703 [2024-11-20 16:25:30.595098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.703 [2024-11-20 16:25:30.595230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.703 [2024-11-20 16:25:30.595279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.703 [2024-11-20 16:25:30.595306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.703 [2024-11-20 16:25:30.595321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x152e0c0 00:30:54.703 [2024-11-20 16:25:30.595362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:54.703 qpair failed and we were unable to recover it. 
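A minimal sketch for issuing the same fabrics CONNECT by hand, outside the test harness: this is illustrative only and not part of the test output, assuming nvme-cli is installed and reusing the address, port, and subsystem NQN printed in the log lines above.

  # Discover subsystems exposed by the SPDK target at 10.0.0.2:4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Attempt the same CONNECT exchange the log shows completing with sct 1, sc 130
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1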
00:30:54.703 [2024-11-20 16:25:30.595820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1523e00 (9): Bad file descriptor 00:30:54.703 Initializing NVMe Controllers 00:30:54.703 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:54.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:54.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:54.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:54.703 Initialization complete. Launching workers. 00:30:54.703 Starting thread on core 1 00:30:54.703 Starting thread on core 2 00:30:54.703 Starting thread on core 3 00:30:54.703 Starting thread on core 0 00:30:54.703 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:54.703 00:30:54.703 real 0m11.363s 00:30:54.703 user 0m21.978s 00:30:54.703 sys 0m3.967s 00:30:54.703 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.703 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:54.703 ************************************ 00:30:54.703 END TEST nvmf_target_disconnect_tc2 00:30:54.703 ************************************ 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.964 rmmod nvme_tcp 00:30:54.964 rmmod nvme_fabrics 00:30:54.964 rmmod nvme_keyring 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1473885 ']' 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1473885 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1473885 ']' 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1473885 00:30:54.964 16:25:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1473885 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1473885' 00:30:54.964 killing process with pid 1473885 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1473885 00:30:54.964 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1473885 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.226 16:25:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.141 16:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.141 00:30:57.141 real 0m21.778s 00:30:57.141 user 0m49.496s 00:30:57.141 sys 0m10.136s 00:30:57.141 16:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.141 16:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:57.141 ************************************ 00:30:57.141 END TEST nvmf_target_disconnect 00:30:57.141 ************************************ 00:30:57.141 16:25:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:57.141 00:30:57.141 real 6m33.670s 00:30:57.141 user 11m27.582s 00:30:57.141 sys 2m15.976s 00:30:57.141 16:25:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.141 16:25:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.141 ************************************ 00:30:57.141 END TEST nvmf_host 00:30:57.141 ************************************ 00:30:57.401 16:25:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = 
\t\c\p ]] 00:30:57.401 16:25:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:57.401 16:25:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:57.401 16:25:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:57.401 16:25:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.401 16:25:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:57.401 ************************************ 00:30:57.401 START TEST nvmf_target_core_interrupt_mode 00:30:57.401 ************************************ 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:57.401 * Looking for test storage... 00:30:57.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:57.401 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:57.402 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.402 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:57.402 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:57.402 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:57.402 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.402 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:57.402 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:57.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.663 --rc genhtml_branch_coverage=1 00:30:57.663 --rc genhtml_function_coverage=1 00:30:57.663 --rc genhtml_legend=1 00:30:57.663 --rc geninfo_all_blocks=1 00:30:57.663 --rc geninfo_unexecuted_blocks=1 00:30:57.663 00:30:57.663 ' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:57.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.663 --rc genhtml_branch_coverage=1 00:30:57.663 --rc genhtml_function_coverage=1 00:30:57.663 --rc genhtml_legend=1 00:30:57.663 --rc geninfo_all_blocks=1 00:30:57.663 --rc geninfo_unexecuted_blocks=1 00:30:57.663 00:30:57.663 ' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:57.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.663 --rc genhtml_branch_coverage=1 00:30:57.663 --rc genhtml_function_coverage=1 00:30:57.663 --rc genhtml_legend=1 00:30:57.663 --rc geninfo_all_blocks=1 00:30:57.663 --rc geninfo_unexecuted_blocks=1 00:30:57.663 00:30:57.663 ' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:57.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.663 --rc genhtml_branch_coverage=1 00:30:57.663 --rc genhtml_function_coverage=1 00:30:57.663 --rc genhtml_legend=1 00:30:57.663 --rc geninfo_all_blocks=1 00:30:57.663 --rc geninfo_unexecuted_blocks=1 00:30:57.663 00:30:57.663 ' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:57.663 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:57.664 ************************************ 00:30:57.664 START TEST nvmf_abort 00:30:57.664 ************************************ 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:57.664 * Looking for test storage... 00:30:57.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:57.664 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:57.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.990 --rc genhtml_branch_coverage=1 00:30:57.990 --rc genhtml_function_coverage=1 00:30:57.990 --rc genhtml_legend=1 00:30:57.990 --rc geninfo_all_blocks=1 00:30:57.990 --rc geninfo_unexecuted_blocks=1 00:30:57.990 00:30:57.990 ' 00:30:57.990 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:57.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.991 --rc genhtml_branch_coverage=1 00:30:57.991 --rc genhtml_function_coverage=1 00:30:57.991 --rc genhtml_legend=1 00:30:57.991 --rc geninfo_all_blocks=1 00:30:57.991 --rc geninfo_unexecuted_blocks=1 00:30:57.991 00:30:57.991 ' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:57.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.991 --rc genhtml_branch_coverage=1 00:30:57.991 --rc genhtml_function_coverage=1 00:30:57.991 --rc genhtml_legend=1 00:30:57.991 --rc geninfo_all_blocks=1 00:30:57.991 --rc geninfo_unexecuted_blocks=1 00:30:57.991 00:30:57.991 ' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:57.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.991 --rc genhtml_branch_coverage=1 00:30:57.991 --rc genhtml_function_coverage=1 00:30:57.991 --rc genhtml_legend=1 00:30:57.991 --rc geninfo_all_blocks=1 00:30:57.991 --rc geninfo_unexecuted_blocks=1 00:30:57.991 00:30:57.991 ' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.991 16:25:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.991 16:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.186 16:25:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:06.186 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
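A minimal sketch for confirming by hand the E810 ports that the device scan above reports: illustrative only and not part of the test output, assuming lspci is available and using the vendor/device ID (0x8086 / 0x159b) printed in the log.

  # List PCI functions matching Intel E810 (vendor 0x8086, device 0x159b)
  lspci -Dnn -d 8086:159b
  # Show which kernel driver each function is bound to (the log indicates ice)
  for p in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "$p -> $(basename "$(readlink -f "/sys/bus/pci/devices/$p/driver")")"
  done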
00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:06.186 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:06.186 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:06.187 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:06.187 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.187 16:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:06.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms 00:31:06.187 00:31:06.187 --- 10.0.0.2 ping statistics --- 00:31:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.187 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:31:06.187 00:31:06.187 --- 10.0.0.1 ping statistics --- 00:31:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.187 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1479429 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1479429 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1479429 ']' 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.187 16:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.187 [2024-11-20 16:25:41.298630] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:06.187 [2024-11-20 16:25:41.299762] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:31:06.187 [2024-11-20 16:25:41.299811] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.187 [2024-11-20 16:25:41.402164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:06.187 [2024-11-20 16:25:41.453677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.187 [2024-11-20 16:25:41.453732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.187 [2024-11-20 16:25:41.453740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.187 [2024-11-20 16:25:41.453752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.187 [2024-11-20 16:25:41.453758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.187 [2024-11-20 16:25:41.455702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.187 [2024-11-20 16:25:41.455861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.187 [2024-11-20 16:25:41.455863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.187 [2024-11-20 16:25:41.532135] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:06.187 [2024-11-20 16:25:41.533180] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:06.187 [2024-11-20 16:25:41.533732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
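For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) reduces to the shell steps below. This is a condensed sketch of exactly the commands in the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this test bed.

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side, test netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1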
00:31:06.187 [2024-11-20 16:25:41.533861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.448 [2024-11-20 16:25:42.184769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.448 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.448 Malloc0 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.449 Delay0 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.449 [2024-11-20 16:25:42.288771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.449 16:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:06.709 [2024-11-20 16:25:42.474378] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:08.624 Initializing NVMe Controllers 00:31:08.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:08.624 controller IO queue size 128 less than required 00:31:08.624 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:08.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:08.624 Initialization complete. Launching workers. 
00:31:08.624 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28385 00:31:08.624 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28442, failed to submit 66 00:31:08.624 success 28385, unsuccessful 57, failed 0 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:08.624 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:08.624 rmmod nvme_tcp 00:31:08.624 rmmod nvme_fabrics 00:31:08.885 rmmod nvme_keyring 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1479429 ']' 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1479429 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1479429 ']' 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1479429 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479429 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479429' 00:31:08.885 killing process with pid 1479429 
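Condensing the abort run that just completed: the target is configured over the default RPC socket (/var/tmp/spdk.sock) and then driven by the abort example at queue depth 128. In the sketch below, rpc.py stands in for the rpc_cmd wrapper seen in the trace, and the workspace prefix is dropped from paths.

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Run aborts for 1 second; the 'IO queue size 128 less than required' warning is
    # expected, since I/O queued at the driver is what the abort requests target.
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev layered on Malloc0 adds artificial completion latency on every op, keeping I/O in flight long enough for aborts to land, which is why 28385 of the 28442 submitted aborts succeed above.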
00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1479429 00:31:08.885 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1479429 00:31:09.146 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.146 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.146 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.146 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.147 16:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.061 16:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.061 00:31:11.061 real 0m13.496s 00:31:11.061 user 0m11.011s 00:31:11.061 sys 0m7.014s 00:31:11.061 16:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.061 16:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.061 ************************************ 00:31:11.061 END TEST nvmf_abort 00:31:11.061 ************************************ 00:31:11.061 16:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:11.061 16:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:11.061 16:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.061 16:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:11.323 ************************************ 00:31:11.323 START TEST nvmf_ns_hotplug_stress 00:31:11.323 ************************************ 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:11.324 * Looking for test storage... 
00:31:11.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:11.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.324 --rc genhtml_branch_coverage=1 00:31:11.324 --rc genhtml_function_coverage=1 00:31:11.324 --rc genhtml_legend=1 00:31:11.324 --rc geninfo_all_blocks=1 00:31:11.324 --rc geninfo_unexecuted_blocks=1 00:31:11.324 00:31:11.324 ' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:11.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.324 --rc genhtml_branch_coverage=1 00:31:11.324 --rc genhtml_function_coverage=1 00:31:11.324 --rc genhtml_legend=1 00:31:11.324 --rc geninfo_all_blocks=1 00:31:11.324 --rc geninfo_unexecuted_blocks=1 00:31:11.324 00:31:11.324 ' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:11.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.324 --rc genhtml_branch_coverage=1 00:31:11.324 --rc genhtml_function_coverage=1 00:31:11.324 --rc genhtml_legend=1 00:31:11.324 --rc geninfo_all_blocks=1 00:31:11.324 --rc geninfo_unexecuted_blocks=1 00:31:11.324 00:31:11.324 ' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:11.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.324 --rc genhtml_branch_coverage=1 00:31:11.324 --rc genhtml_function_coverage=1 
00:31:11.324 --rc genhtml_legend=1 00:31:11.324 --rc geninfo_all_blocks=1 00:31:11.324 --rc geninfo_unexecuted_blocks=1 00:31:11.324 00:31:11.324 ' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
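One detail from the common.sh prologue traced above: the per-run host identity comes straight from nvme-cli. The exact parameter expansion is an assumption here (the trace only shows the resulting values), but the effect is:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed derivation: strip the NQN down to the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")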
00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.324 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.325 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.586 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.586 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:11.586 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.586 16:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.724 16:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.724 16:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:19.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:19.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.724 
16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.724 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:19.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:19.725 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.725 16:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:19.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:31:19.725 00:31:19.725 --- 10.0.0.2 ping statistics --- 00:31:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.725 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:31:19.725 00:31:19.725 --- 10.0.0.1 ping statistics --- 00:31:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.725 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1484122 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1484122 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1484122 ']' 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:19.725 16:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:19.725 [2024-11-20 16:25:54.783739] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:19.725 [2024-11-20 16:25:54.784878] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:31:19.725 [2024-11-20 16:25:54.784930] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.725 [2024-11-20 16:25:54.886044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:19.725 [2024-11-20 16:25:54.937702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.725 [2024-11-20 16:25:54.937750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.725 [2024-11-20 16:25:54.937758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.725 [2024-11-20 16:25:54.937765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.725 [2024-11-20 16:25:54.937772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.725 [2024-11-20 16:25:54.939604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.725 [2024-11-20 16:25:54.939767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.725 [2024-11-20 16:25:54.939767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.725 [2024-11-20 16:25:55.016314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:19.725 [2024-11-20 16:25:55.017330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:19.725 [2024-11-20 16:25:55.017902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:19.725 [2024-11-20 16:25:55.018030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
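As with the abort test, the hotplug-stress target runs inside the namespace via the NVMF_TARGET_NS_CMD prefix. The invocation from the trace is reproduced below; -i 0 selects shared-memory instance 0, -e 0xFFFF enables all tracepoint groups, and core mask 0xE pins the three reactors to cores 1-3, matching the 'Reactor started on core' notices above.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE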
00:31:19.725 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.725 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:19.725 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:19.725 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:19.725 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:19.725 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.725 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:19.726 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:19.987 [2024-11-20 16:25:55.808775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.987 16:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:20.247 16:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.509 [2024-11-20 16:25:56.181448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.509 16:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:20.509 16:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:20.769 Malloc0 00:31:20.769 16:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:21.029 Delay0 00:31:21.030 16:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.030 16:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:21.291 NULL1 00:31:21.291 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
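The remove/add/resize records that follow are the stress loop itself. Reconstructed from the trace (ns_hotplug_stress.sh lines 40-50; the loop structure is inferred rather than quoted), it is roughly:

    # Background reader against the subsystem for 30 seconds.
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                               # 1484750 in this run
    null_size=1000
    while kill -0 "$PERF_PID"; do             # iterate while perf is still alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # yank ns 1 under load
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it back in
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"   # 1001, 1002, ... per pass above
    done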
00:31:21.551 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1484750 00:31:21.551 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:21.551 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:21.551 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.813 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.813 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:21.813 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:22.171 true 00:31:22.171 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:22.171 16:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.483 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.483 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:22.483 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:22.745 true 00:31:22.745 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:22.745 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.005 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.266 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:23.266 16:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:23.266 true 00:31:23.266 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:23.266 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.526 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.785 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:23.785 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:23.785 true 00:31:24.045 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:24.045 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.045 16:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.305 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:24.305 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:24.565 true 00:31:24.565 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:24.565 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.565 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.825 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:24.825 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:25.086 true 00:31:25.086 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:25.086 16:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.346 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.346 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:25.346 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:25.607 true 00:31:25.607 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:25.607 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.867 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.867 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:25.867 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:26.128 true 00:31:26.128 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:26.128 16:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.388 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.649 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:26.649 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:26.649 true 00:31:26.649 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:26.649 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.909 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.170 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:27.170 16:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:27.170 true 00:31:27.170 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1484750 00:31:27.170 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.430 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.691 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:27.691 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:27.691 true 00:31:27.952 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:27.952 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.952 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.214 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:28.215 16:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:28.215 true 00:31:28.476 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:28.476 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.477 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.739 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:28.739 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:29.000 true 00:31:29.000 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:29.000 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.000 16:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.261 16:26:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:29.261 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:29.521 true 00:31:29.521 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:29.521 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.782 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.782 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:29.782 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:30.042 true 00:31:30.042 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:30.042 16:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.304 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.304 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:30.304 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:30.564 true 00:31:30.564 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:30.564 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.825 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.825 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:30.825 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:31.085 true 00:31:31.085 16:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:31.085 16:26:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.345 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.606 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:31.606 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:31.606 true 00:31:31.606 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:31.606 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.866 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.126 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:32.126 16:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:32.126 true 00:31:32.126 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:32.126 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.387 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.648 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:32.648 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:32.909 true 00:31:32.909 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:32.909 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.909 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.170 16:26:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:33.170 16:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:33.434 true 00:31:33.434 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:33.434 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.434 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.697 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:33.697 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:33.957 true 00:31:33.957 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:33.957 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.218 16:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.218 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:34.218 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:34.480 true 00:31:34.480 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:34.480 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.741 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.741 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:34.741 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:35.002 true 00:31:35.002 16:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:35.002 16:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.263 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:35.524 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:35.525 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:35.525 true 00:31:35.525 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:35.525 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.786 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.046 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:36.046 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:36.047 true 00:31:36.047 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:36.047 16:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.308 16:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.570 16:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:36.570 16:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:36.570 true 00:31:36.831 16:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:36.831 16:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.831 16:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.091 16:26:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:37.091 16:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:37.351 true 00:31:37.352 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:37.352 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.352 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.612 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:37.612 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:37.872 true 00:31:37.872 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:37.872 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.131 16:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:38.132 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:38.132 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:38.392 true 00:31:38.392 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:38.392 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.652 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:38.652 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:38.652 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:38.913 true 00:31:38.913 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:38.913 16:26:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.173 16:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.433 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:39.433 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:39.433 true 00:31:39.433 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:39.433 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.694 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.955 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:39.955 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:39.955 true 00:31:39.955 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:39.955 16:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.216 16:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:40.478 16:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:40.478 16:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:40.740 true 00:31:40.740 16:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:40.740 16:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.740 16:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.000 16:26:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:41.000 16:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:41.260 true 00:31:41.260 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:41.260 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.260 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.520 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:41.520 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:41.780 true 00:31:41.780 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:41.780 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.041 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.041 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:42.041 16:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:42.301 true 00:31:42.301 16:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:42.301 16:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.561 16:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.561 16:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:42.561 16:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:42.822 true 00:31:42.822 16:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:42.822 16:26:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.083 16:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.343 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:43.343 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:43.343 true 00:31:43.343 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:43.343 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.605 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.867 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:43.867 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:44.128 true 00:31:44.128 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:44.128 16:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.128 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.388 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:44.389 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:44.649 true 00:31:44.649 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:44.649 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.911 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.911 16:26:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:44.911 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:45.172 true 00:31:45.172 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:45.172 16:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.433 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.433 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:45.433 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:45.695 true 00:31:45.695 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:45.695 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.957 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.218 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:46.218 16:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:46.218 true 00:31:46.218 16:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:46.218 16:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.480 16:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.741 16:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:46.741 16:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:46.741 true 00:31:46.741 16:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:46.741 16:26:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.001 16:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.263 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:47.263 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:47.263 true 00:31:47.524 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:47.524 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.524 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.785 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:47.785 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:47.785 true 00:31:48.047 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:48.047 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.047 16:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.308 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:48.308 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:48.568 true 00:31:48.568 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:48.568 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.568 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.828 16:26:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:48.828 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:49.089 true 00:31:49.089 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:49.089 16:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.349 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.349 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:49.349 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:49.610 true 00:31:49.610 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:49.610 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.870 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.870 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:49.870 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:50.131 true 00:31:50.131 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:50.131 16:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.392 16:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.654 16:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:50.654 16:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:50.654 true 00:31:50.654 16:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:50.654 16:26:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.915 16:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.177 16:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:31:51.177 16:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:31:51.177 true 00:31:51.177 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750 00:31:51.177 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.439 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.703 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:31:51.703 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:31:51.703 Initializing NVMe Controllers 00:31:51.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.703 Controller IO queue size 128, less than required. 00:31:51.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:51.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:51.703 Initialization complete. Launching workers. 
00:31:51.703 ========================================================
00:31:51.703                                                                             Latency(us)
00:31:51.703 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:31:51.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30460.80      14.87    4202.18    1086.92   11459.79
00:31:51.703 ========================================================
00:31:51.703 Total                                                                    :   30460.80      14.87    4202.18    1086.92   11459.79
00:31:51.703
00:31:51.703 true
00:31:51.703 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1484750
00:31:51.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1484750) - No such process
00:31:51.963 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1484750
00:31:51.963 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:51.963 16:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:52.223 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:52.223 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:52.223 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:52.223 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:52.223 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:52.484 null0
00:31:52.484 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:52.484 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:52.484 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:52.484 null1
00:31:52.484 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:52.484 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:52.484 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:52.745 null2
00:31:52.745 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:52.746 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:52.746 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:53.006 null3 00:31:53.006 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:53.006 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:53.006 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:53.006 null4 00:31:53.006 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:53.006 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:53.006 16:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:53.267 null5 00:31:53.267 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:53.267 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:53.267 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:53.541 null6 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:53.541 null7 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
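The dozens of near-identical stanzas above are a single loop. At @40 the harness launched spdk_nvme_perf for a 30-second, 512-byte randread run against the target and recorded its PID at @42; then, for as long as the process still answers kill -0, it hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one MiB per pass (null_size climbs from 1000 to 1054 before perf exits). A sketch of that @44-@50 loop, reconstructed from the xtrace rather than quoted verbatim from the script:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    while kill -0 $PERF_PID; do    # the final failed probe is what prints the "No such process" line above
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # yank nsid 1 out from under the subsystem
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it straight back in
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size    # grow nsid 2's backing bdev under load; each bare "true" above is this call's JSON reply
    done
    wait $PERF_PID

The perf summary is internally consistent: at a 512-byte I/O size, 30460.80 IOPS x 512 B / 2^20 = 14.87 MiB/s, matching the MiB/s column, and Little's law ties latency back to throughput, since 128 outstanding commands / 4202.18 us average latency is roughly 30,460 IOPS again.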
00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.541 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
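Each add_remove call visible in the trace (script lines 14-18) binds one fixed namespace ID to one fixed bdev and hot-adds then hot-removes it ten times against nqn.2016-06.io.spdk:cnode1. A sketch reconstructed from the xtrace entries, not a verbatim copy of the script:

    # One hotplug worker: attach $bdev as namespace $nsid, detach it, ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }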
00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:53.542 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
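The interleaved pids+=($!) entries come from the launcher loop at script lines 62-66: each worker runs in the background, its PID is recorded, and the parent later waits on all eight at once (the "wait 1490982 1490983 ..." entry below is that step). The reconstructed shape, under the same assumptions as the sketches above:

    # Launch all eight workers concurrently; NSID i+1 is backed by bdev null$i.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # corresponds to the 'wait 1490982 1490983 ...' entry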
00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.543 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
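Earlier in this excerpt, script line 44 probes an already-finished process with kill -0 (signal 0 delivers nothing; it only tests that the PID exists) and line 53 reaps it with wait, so the "No such process" message is the expected result once the target process has exited. The idiom, sketched with a PID variable taken from the trace for illustration only:

    # Liveness check without killing: kill -0 succeeds only while $pid exists.
    pid=1484750   # PID copied from the trace above; illustrative only
    if kill -0 "$pid" 2>/dev/null; then
        echo "still running"
    else
        wait "$pid" 2>/dev/null   # reap the exit status if it was our child
    fi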
00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1490982 1490983 1490986 1490987 1490989 1490991 1490993 1490995 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:53.544 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:53.811 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.073 16:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.334 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:54.595 16:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:54.595 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:54.855 16:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.855 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:54.856 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.116 16:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:55.116 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.116 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.116 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:55.116 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.116 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.116 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.377 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:55.378 16:26:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:55.378 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:55.638 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.898 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:55.899 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.159 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.159 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.159 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:56.159 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:56.159 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.159 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.159 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.160 16:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:56.160 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.160 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.160 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:56.160 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.160 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.160 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:56.160 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.421 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:56.682 16:26:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:56.682 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.943 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:57.204 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:57.204 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:57.204 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:31:57.204 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:57.204 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:57.205 16:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:57.205 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.205 16:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.465 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.465 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.465 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.465 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.465 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:57.465 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:57.465 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.727 rmmod nvme_tcp 00:31:57.727 rmmod nvme_fabrics 00:31:57.727 rmmod nvme_keyring 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.727 16:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1484122 ']' 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1484122 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1484122 ']' 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1484122 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.727 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1484122 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1484122' 00:31:57.988 killing process with pid 1484122 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1484122 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1484122 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.988 16:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.988 16:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.535 00:32:00.535 real 0m48.889s 00:32:00.535 user 3m2.507s 00:32:00.535 sys 0m22.410s 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:00.535 ************************************ 00:32:00.535 END TEST nvmf_ns_hotplug_stress 00:32:00.535 ************************************ 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.535 ************************************ 00:32:00.535 START TEST nvmf_delete_subsystem 00:32:00.535 ************************************ 00:32:00.535 16:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:00.535 * Looking for test storage... 
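The first half of this trace is the tail of the namespace hot-plug loop that just finished: ns_hotplug_stress.sh@16 is the loop control ((( ++i )) / (( i < 10 ))), @17 attaches one of the null bdevs null0..null7 to cnode1 as namespace 1..8, and @18 detaches a namespace by nsid, so the subsystem's namespace set churns while the host keeps I/O in flight. A minimal sketch of that shape, assuming a random pick per pass — the actual selection logic is not visible in this trace, and RPC/NQN are shorthand for the full paths shown above:

    # hypothetical reconstruction of the loop traced as ns_hotplug_stress.sh@16-18
    RPC=scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        nsid=$(( RANDOM % 8 + 1 ))       # nsid N is backed by bdev null(N-1)
        $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$(( nsid - 1 ))" || true
        $RPC nvmf_subsystem_remove_ns "$NQN" "$(( RANDOM % 8 + 1 ))" || true   # may target an absent nsid; that collision is the point
    done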
00:32:00.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:00.535 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.536 --rc genhtml_branch_coverage=1 00:32:00.536 --rc genhtml_function_coverage=1 00:32:00.536 --rc genhtml_legend=1 00:32:00.536 --rc geninfo_all_blocks=1 00:32:00.536 --rc geninfo_unexecuted_blocks=1 00:32:00.536 00:32:00.536 ' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.536 --rc genhtml_branch_coverage=1 00:32:00.536 --rc genhtml_function_coverage=1 00:32:00.536 --rc genhtml_legend=1 00:32:00.536 --rc geninfo_all_blocks=1 00:32:00.536 --rc geninfo_unexecuted_blocks=1 00:32:00.536 00:32:00.536 ' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.536 --rc genhtml_branch_coverage=1 00:32:00.536 --rc genhtml_function_coverage=1 00:32:00.536 --rc genhtml_legend=1 00:32:00.536 --rc geninfo_all_blocks=1 00:32:00.536 --rc geninfo_unexecuted_blocks=1 00:32:00.536 00:32:00.536 ' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.536 --rc genhtml_branch_coverage=1 00:32:00.536 --rc genhtml_function_coverage=1 00:32:00.536 --rc 
genhtml_legend=1 00:32:00.536 --rc geninfo_all_blocks=1 00:32:00.536 --rc geninfo_unexecuted_blocks=1 00:32:00.536 00:32:00.536 ' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.536 16:26:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.536 16:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.681 16:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.681 16:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:08.681 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:08.681 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.681 16:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:08.681 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.681 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:08.682 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:32:08.682 00:32:08.682 --- 10.0.0.2 ping statistics --- 00:32:08.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.682 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:32:08.682 00:32:08.682 --- 10.0.0.1 ping statistics --- 00:32:08.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.682 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1496141 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1496141 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1496141 ']' 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
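Before the target app comes up, nvmftestinit has identified the two E810 ports (0000:4b:00.0 and 0000:4b:00.1, device id 0x159b) and assigned their net devices the target and initiator roles. Condensed, the nvmf_tcp_init plumbing traced above amounts to the following — every interface name and address here is taken directly from the log:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the trace tags this rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns

The two pings (0.624 ms and 0.283 ms round trips above) confirm the physical ports can reach each other across the namespace boundary before nvmf_tgt is started inside cvl_0_0_ns_spdk.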
00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.682 16:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.682 [2024-11-20 16:26:43.793411] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:08.682 [2024-11-20 16:26:43.794544] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:32:08.682 [2024-11-20 16:26:43.794594] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.682 [2024-11-20 16:26:43.893891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:08.682 [2024-11-20 16:26:43.945426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.682 [2024-11-20 16:26:43.945476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.682 [2024-11-20 16:26:43.945485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.682 [2024-11-20 16:26:43.945498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.682 [2024-11-20 16:26:43.945504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.682 [2024-11-20 16:26:43.947105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.682 [2024-11-20 16:26:43.947111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.682 [2024-11-20 16:26:44.023564] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.682 [2024-11-20 16:26:44.024259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:08.682 [2024-11-20 16:26:44.024504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
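The flags on the nvmf_tgt launch line (nvmf/common.sh@508 above) are what make this the interrupt-mode variant of the suite: -m 0x3 runs two reactors on cores 0 and 1, -e 0xFFFF enables all tracepoint groups, and --interrupt-mode switches the reactors from busy-polling to sleeping on event file descriptors until work arrives — hence the spdk_interrupt_mode_enable notice and the spdk_thread_set_interrupt_mode notices for app_thread and the two poll-group threads. A simplified stand-in for the launch-and-wait step, under those same flags (the real waitforlisten helper in autotest_common.sh additionally checks the pid and enforces a timeout):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll the RPC socket until the app answers; rpc_get_methods is a cheap no-op query
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done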
00:32:08.682 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:08.682 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:08.682 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:08.682 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:08.682 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 [2024-11-20 16:26:44.660195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 [2024-11-20 16:26:44.692650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 NULL1 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.944 16:26:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 Delay0 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1496220 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:08.944 16:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:08.944 [2024-11-20 16:26:44.817892] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
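Everything is now in place for the actual regression: NULL1 is a 1000 MiB null bdev with 512-byte blocks, Delay0 wraps it with roughly one second of added latency on every read and write (the four 1000000 values are the average/p99 read and write latencies in microseconds), and spdk_nvme_perf keeps 128 commands queued against it. With ~1 s per I/O and a 5 s run, the delete issued after the 2 s sleep (delete_subsystem.sh@30 above) is guaranteed to land while commands are outstanding. Condensed from the rpc_cmd calls above, with rpc.py and spdk_nvme_perf standing in for the full workspace paths:

    rpc.py bdev_null_create NULL1 1000 512                      # 1000 MiB backing device, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000          # ~1 s latency on every I/O
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
           -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &            # 70/30 random read/write, queue depth 128
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # tear the subsystem down mid-I/O

The "completed with error (sct=0, sc=8)" lines that follow are the expected outcome: generic status 0x08 is Command Aborted due to SQ Deletion, i.e. the in-flight commands are failed back to the host as the queues are destroyed, while "starting I/O failed: -6" (likely -ENXIO) marks submissions attempted after the qpair was already torn down.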
00:32:10.858 16:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:10.858 16:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.858 16:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 starting I/O failed: -6 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 [2024-11-20 16:26:46.902540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c494a0 is same with the state(6) to be set 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 
Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Write completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, 
sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.120 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 
Read completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 starting I/O failed: -6 00:32:11.121 Write completed with error (sct=0, sc=8) 00:32:11.121 Read completed with error (sct=0, sc=8) 00:32:11.121 [2024-11-20 16:26:46.903700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e4800d490 is same with the state(6) to be set 00:32:12.065 [2024-11-20 16:26:47.874494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4a9a0 is same with the state(6) to be set 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 [2024-11-20 16:26:47.906198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e4800d7c0 is same with the state(6) to be set 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error 
(sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 [2024-11-20 16:26:47.906341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e4800d020 is same with the state(6) to be set 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 
00:32:12.065 [2024-11-20 16:26:47.906494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5e48000c40 is same with the state(6) to be set 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.065 Read completed with error (sct=0, sc=8) 00:32:12.065 Write completed with error (sct=0, sc=8) 00:32:12.066 Read completed with error (sct=0, sc=8) 00:32:12.066 Write completed with error (sct=0, sc=8) 00:32:12.066 Read completed with error (sct=0, sc=8) 00:32:12.066 Read completed with error (sct=0, sc=8) 00:32:12.066 Write completed with error (sct=0, sc=8) 00:32:12.066 Read completed with error (sct=0, sc=8) 00:32:12.066 Read completed with error (sct=0, sc=8) 00:32:12.066 Write completed with error (sct=0, sc=8) 00:32:12.066 Read completed with error (sct=0, sc=8) 00:32:12.066 [2024-11-20 16:26:47.906974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49680 is same with the state(6) to be set 00:32:12.066 Initializing NVMe Controllers 00:32:12.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:12.066 Controller IO queue size 128, less than required. 00:32:12.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:12.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:12.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:12.066 Initialization complete. Launching workers. 
00:32:12.066 ========================================================
00:32:12.066 Latency(us)
00:32:12.066 Device Information : IOPS MiB/s Average min max
00:32:12.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.51 0.08 877503.80 339.13 1010671.98
00:32:12.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.91 0.08 1068373.26 700.73 2002233.29
00:32:12.066 ========================================================
00:32:12.066 Total : 327.42 0.16 977717.51 339.13 2002233.29
00:32:12.066
00:32:12.066 [2024-11-20 16:26:47.907492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4a9a0 (9): Bad file descriptor
00:32:12.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:12.066 16:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:12.066 16:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:12.066 16:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1496220
00:32:12.066 16:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1496220
00:32:12.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1496220) - No such process
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1496220
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1496220
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1496220
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:12.639 [2024-11-20 16:26:48.440478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.639 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1496923 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:12.640 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:12.640 [2024-11-20 16:26:48.538029] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
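The sleep 0.5 / kill -0 iterations that follow are delete_subsystem.sh's bounded wait for this second, 3-second perf run (pid 1496923). A plausible reconstruction of the loop behind the @57/@58/@60 trace lines, inferred from the xtrace output rather than copied from the script:

  # Assumed shape of the polling loop; names follow the trace above.
  delay=0
  while kill -0 "$perf_pid"; do      # succeeds while perf is still alive
      sleep 0.5
      (( delay++ > 20 )) && exit 1   # give up after ~10 s
  done
  wait "$perf_pid"                   # reap the exit status (the @67 line)

Unlike the first run, nothing deletes the subsystem this time, so perf exits on its own: every latency in the summary below sits just above the 1,000,000 us floor injected by the delay bdev, and the plain wait succeeds where the first run needed NOT wait.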
00:32:13.212 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:13.212 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923 00:32:13.212 16:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:13.781 16:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:13.781 16:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923 00:32:13.781 16:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:14.040 16:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:14.040 16:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923 00:32:14.040 16:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:14.611 16:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:14.611 16:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923 00:32:14.611 16:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:15.182 16:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:15.182 16:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923 00:32:15.183 16:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:15.752 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:15.752 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923 00:32:15.752 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:15.752 Initializing NVMe Controllers 00:32:15.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:15.752 Controller IO queue size 128, less than required. 00:32:15.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:15.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:15.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:15.752 Initialization complete. Launching workers. 
00:32:15.752 ========================================================
00:32:15.752 Latency(us)
00:32:15.752 Device Information : IOPS MiB/s Average min max
00:32:15.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003218.88 1000273.70 1042216.22
00:32:15.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004166.15 1000399.99 1010676.31
00:32:15.752 ========================================================
00:32:15.752 Total : 256.00 0.12 1003692.52 1000273.70 1042216.22
00:32:15.752
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496923
00:32:16.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1496923) - No such process
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1496923
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:16.323 16:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:16.323 rmmod nvme_tcp
00:32:16.323 rmmod nvme_fabrics
00:32:16.323 rmmod nvme_keyring
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1496141 ']'
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1496141
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1496141 ']'
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1496141
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496141 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1496141' 00:32:16.323 killing process with pid 1496141 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1496141 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1496141 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.323 16:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.868 00:32:18.868 real 0m18.334s 00:32:18.868 user 0m26.442s 00:32:18.868 sys 0m7.435s 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.868 ************************************ 00:32:18.868 END TEST nvmf_delete_subsystem 00:32:18.868 ************************************ 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:18.868 ************************************ 00:32:18.868 START TEST nvmf_host_management 00:32:18.868 ************************************ 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:18.868 * Looking for test storage... 00:32:18.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.868 --rc genhtml_branch_coverage=1 00:32:18.868 --rc genhtml_function_coverage=1 00:32:18.868 --rc genhtml_legend=1 00:32:18.868 --rc geninfo_all_blocks=1 00:32:18.868 --rc geninfo_unexecuted_blocks=1 00:32:18.868 00:32:18.868 ' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.868 --rc genhtml_branch_coverage=1 00:32:18.868 --rc genhtml_function_coverage=1 00:32:18.868 --rc genhtml_legend=1 00:32:18.868 --rc geninfo_all_blocks=1 00:32:18.868 --rc geninfo_unexecuted_blocks=1 00:32:18.868 00:32:18.868 ' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.868 --rc genhtml_branch_coverage=1 00:32:18.868 --rc genhtml_function_coverage=1 00:32:18.868 --rc genhtml_legend=1 00:32:18.868 --rc geninfo_all_blocks=1 00:32:18.868 --rc geninfo_unexecuted_blocks=1 00:32:18.868 00:32:18.868 ' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.868 --rc genhtml_branch_coverage=1 00:32:18.868 --rc genhtml_function_coverage=1 00:32:18.868 --rc genhtml_legend=1 
00:32:18.868 --rc geninfo_all_blocks=1 00:32:18.868 --rc geninfo_unexecuted_blocks=1 00:32:18.868 00:32:18.868 ' 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:18.868 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.869 16:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.869 16:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.010 16:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:27.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:27.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.010 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
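For readers following the trace, the discovery loop above reduces to matching PCI vendor:device pairs against the tables just built and then listing the netdevs that sysfs exposes under each matching function. A minimal standalone sketch of the same pattern, assuming only the 0x8086:0x159b (E810) pair actually seen in this run:

  #!/usr/bin/env bash
  # Walk every PCI function; report E810 (8086:159b) ports and their netdevs.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")                # e.g. 0x8086 (Intel)
      device=$(<"$pci/device")                # e.g. 0x159b (E810)
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do             # driver bound => netdev present
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done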
00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:27.011 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:27.011 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.011 16:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:27.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:32:27.011 00:32:27.011 --- 10.0.0.2 ping statistics --- 00:32:27.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.011 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:27.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:32:27.011 00:32:27.011 --- 10.0.0.1 ping statistics --- 00:32:27.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.011 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.011 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1501961 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1501961 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1501961 ']' 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:27.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.012 16:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.012 [2024-11-20 16:27:02.256991] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:27.012 [2024-11-20 16:27:02.258116] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:32:27.012 [2024-11-20 16:27:02.258173] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.012 [2024-11-20 16:27:02.359608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:27.012 [2024-11-20 16:27:02.413235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.012 [2024-11-20 16:27:02.413286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.012 [2024-11-20 16:27:02.413295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.012 [2024-11-20 16:27:02.413302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.012 [2024-11-20 16:27:02.413308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.012 [2024-11-20 16:27:02.415314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.012 [2024-11-20 16:27:02.415574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:27.012 [2024-11-20 16:27:02.415733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:27.012 [2024-11-20 16:27:02.415733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.012 [2024-11-20 16:27:02.494289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:27.012 [2024-11-20 16:27:02.495314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:27.012 [2024-11-20 16:27:02.495585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:27.012 [2024-11-20 16:27:02.496148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:27.012 [2024-11-20 16:27:02.496195] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
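For anyone reproducing this bring-up outside the harness, the nvmf_tcp_init sequence traced above comes down to hiding one port of the two-port NIC in a private namespace, numbering both ends of the resulting point-to-point path, opening TCP/4420, and then launching the target inside that namespace. A condensed replay, assuming root privileges (interface names cvl_0_0/cvl_0_1, addresses, and flags are the ones from this run):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator end
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  # the target then runs inside the namespace, as in the trace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E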
00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.273 [2024-11-20 16:27:03.120767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.273 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.273 Malloc0 00:32:27.534 [2024-11-20 16:27:03.225117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1502195 00:32:27.534 16:27:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1502195 /var/tmp/bdevperf.sock 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1502195 ']' 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:27.534 { 00:32:27.534 "params": { 00:32:27.534 "name": "Nvme$subsystem", 00:32:27.534 "trtype": "$TEST_TRANSPORT", 00:32:27.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.534 "adrfam": "ipv4", 00:32:27.534 "trsvcid": "$NVMF_PORT", 00:32:27.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.534 "hdgst": ${hdgst:-false}, 00:32:27.534 "ddgst": ${ddgst:-false} 00:32:27.534 }, 00:32:27.534 "method": "bdev_nvme_attach_controller" 00:32:27.534 } 00:32:27.534 EOF 00:32:27.534 )") 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:27.534 16:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:27.534 "params": { 00:32:27.534 "name": "Nvme0", 00:32:27.534 "trtype": "tcp", 00:32:27.534 "traddr": "10.0.0.2", 00:32:27.534 "adrfam": "ipv4", 00:32:27.534 "trsvcid": "4420", 00:32:27.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.534 "hdgst": false, 00:32:27.534 "ddgst": false 00:32:27.534 }, 00:32:27.534 "method": "bdev_nvme_attach_controller" 00:32:27.534 }' 00:32:27.534 [2024-11-20 16:27:03.335532] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:32:27.534 [2024-11-20 16:27:03.335606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502195 ] 00:32:27.534 [2024-11-20 16:27:03.427248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.795 [2024-11-20 16:27:03.481284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.795 Running I/O for 10 seconds... 00:32:28.368 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.368 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:28.368 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:28.368 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.368 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:28.368 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.369 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:28.369 [2024-11-20 16:27:04.233003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 
[2024-11-20 16:27:04.233176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 (same tcp.c:1773 message repeated ~40 more times for tqpair=0xd2ff20 between 16:27:04.233183 and 16:27:04.233487) 00:32:28.369 [2024-11-20 16:27:04.233494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ff20 is same with the state(6) to be set 00:32:28.369 [2024-11-20 16:27:04.233815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.233877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.233901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.233910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.233921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.233930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.233940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.233947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.233957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.233965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.233975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.233983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.233992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.234000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.234010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 
16:27:04.234018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.369 [2024-11-20 16:27:04.234029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.369 [2024-11-20 16:27:04.234037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.370 [2024-11-20 16:27:04.234823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.370 [2024-11-20 16:27:04.234830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.234986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.234995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.235003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.235013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.235023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.235033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.371 [2024-11-20 16:27:04.235040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.235049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2303120 is same with the state(6) to be set 00:32:28.371 [2024-11-20 16:27:04.236368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:28.371 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.371 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:28.371 task offset: 106496 on job bdev=Nvme0n1 fails 00:32:28.371 00:32:28.371 Latency(us) 00:32:28.371 [2024-11-20T15:27:04.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.371 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:28.371 Job: Nvme0n1 ended in about 0.59 seconds with error 00:32:28.371 Verification LBA range: start 0x0 length 0x400 00:32:28.371 Nvme0n1 : 0.59 1413.79 88.36 108.75 0.00 41039.22 3904.85 35826.35 00:32:28.371 [2024-11-20T15:27:04.307Z] =================================================================================================================== 
00:32:28.371 [2024-11-20T15:27:04.307Z] Total : 1413.79 88.36 108.75 0.00 41039.22 3904.85 35826.35 00:32:28.371 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.371 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:28.371 [2024-11-20 16:27:04.238615] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:28.371 [2024-11-20 16:27:04.238659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea000 (9): Bad file descriptor 00:32:28.371 [2024-11-20 16:27:04.240264] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:28.371 [2024-11-20 16:27:04.240371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:28.371 [2024-11-20 16:27:04.240415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.371 [2024-11-20 16:27:04.240436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:28.371 [2024-11-20 16:27:04.240447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:28.371 [2024-11-20 16:27:04.240454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:28.371 [2024-11-20 16:27:04.240462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20ea000 00:32:28.371 [2024-11-20 16:27:04.240488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea000 (9): Bad file descriptor 00:32:28.371 [2024-11-20 16:27:04.240518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:28.371 [2024-11-20 16:27:04.240527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:28.371 [2024-11-20 16:27:04.240538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:28.371 [2024-11-20 16:27:04.240556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
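[annotation] Context for the failure sequence above: the flood of ABORTED - SQ DELETION (00/08) completions is the target tearing down the I/O submission queue during the forced reset, and the COMMAND SPECIFIC (01/84) completion on the fabrics CONNECT (sct 1, sc 132) is the Connect Invalid Host status — the reconnect from nqn.2016-06.io.spdk:host0 races host_management.sh's nvmf_subsystem_add_host RPC and loses. A minimal sketch of inspecting and granting host access with the standard SPDK RPCs (paths and NQNs taken from the log above; this mirrors, not replaces, what the test script does):

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# List subsystems and their currently allowed hosts.
$rpc nvmf_get_subsystems

# Authorize the host NQN that the reconnect above was rejected under.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
```

Until the add_host call lands, every reconnect attempt from that host NQN fails exactly as logged ("does not allow host"), which is the behavior this test case is asserting.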
00:32:28.371 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.371 16:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1502195 00:32:29.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1502195) - No such process 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:29.756 { 00:32:29.756 "params": { 00:32:29.756 "name": "Nvme$subsystem", 00:32:29.756 "trtype": "$TEST_TRANSPORT", 00:32:29.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.756 "adrfam": "ipv4", 00:32:29.756 "trsvcid": "$NVMF_PORT", 00:32:29.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.756 "hdgst": ${hdgst:-false}, 00:32:29.756 "ddgst": ${ddgst:-false} 00:32:29.756 }, 00:32:29.756 "method": "bdev_nvme_attach_controller" 00:32:29.756 } 00:32:29.756 EOF 00:32:29.756 )") 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:29.756 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:29.757 16:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:29.757 "params": { 00:32:29.757 "name": "Nvme0", 00:32:29.757 "trtype": "tcp", 00:32:29.757 "traddr": "10.0.0.2", 00:32:29.757 "adrfam": "ipv4", 00:32:29.757 "trsvcid": "4420", 00:32:29.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.757 "hdgst": false, 00:32:29.757 "ddgst": false 00:32:29.757 }, 00:32:29.757 "method": "bdev_nvme_attach_controller" 00:32:29.757 }' 00:32:29.757 [2024-11-20 16:27:05.312625] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
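[annotation] The gen_nvmf_target_json helper traced above renders the heredoc template into the bdev_nvme_attach_controller parameters that bdevperf reads via the --json /dev/fd/62 process substitution. A hypothetical standalone equivalent is sketched below; the outer "subsystems"/"config" wrapper is an assumption based on SPDK's standard JSON config format (the excerpt only shows the inner fragment), while the params and bdevperf flags are copied from the log:

```bash
# Assumed-equivalent config file for the bdevperf invocation above.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same workload parameters as the logged run: qd 64, 64 KiB verify I/O for 1 s.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1
```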
00:32:29.757 [2024-11-20 16:27:05.312702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502606 ] 00:32:29.757 [2024-11-20 16:27:05.408317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.757 [2024-11-20 16:27:05.460645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.757 Running I/O for 1 seconds... 00:32:30.959 1495.00 IOPS, 93.44 MiB/s 00:32:30.959 Latency(us) 00:32:30.959 [2024-11-20T15:27:06.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:30.959 Verification LBA range: start 0x0 length 0x400 00:32:30.959 Nvme0n1 : 1.01 1548.16 96.76 0.00 0.00 40523.28 1938.77 38666.24 00:32:30.959 [2024-11-20T15:27:06.895Z] =================================================================================================================== 00:32:30.959 [2024-11-20T15:27:06.895Z] Total : 1548.16 96.76 0.00 0.00 40523.28 1938.77 38666.24 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:30.959 rmmod nvme_tcp 00:32:30.959 rmmod nvme_fabrics 00:32:30.959 rmmod nvme_keyring 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1501961 ']' 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1501961 00:32:30.959 16:27:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1501961 ']' 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1501961 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.959 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501961 00:32:31.220 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:31.220 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:31.220 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501961' 00:32:31.220 killing process with pid 1501961 00:32:31.220 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1501961 00:32:31.220 16:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1501961 00:32:31.220 [2024-11-20 16:27:07.012796] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.220 16:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.762 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:33.763 00:32:33.763 real 0m14.724s 00:32:33.763 user 
0m19.247s 00:32:33.763 sys 0m7.455s 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:33.763 ************************************ 00:32:33.763 END TEST nvmf_host_management 00:32:33.763 ************************************ 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:33.763 ************************************ 00:32:33.763 START TEST nvmf_lvol 00:32:33.763 ************************************ 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:33.763 * Looking for test storage... 00:32:33.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:33.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.763 --rc genhtml_branch_coverage=1 00:32:33.763 --rc genhtml_function_coverage=1 00:32:33.763 --rc genhtml_legend=1 00:32:33.763 --rc geninfo_all_blocks=1 00:32:33.763 --rc geninfo_unexecuted_blocks=1 00:32:33.763 00:32:33.763 ' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:33.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.763 --rc genhtml_branch_coverage=1 00:32:33.763 --rc genhtml_function_coverage=1 00:32:33.763 --rc genhtml_legend=1 00:32:33.763 --rc geninfo_all_blocks=1 00:32:33.763 --rc geninfo_unexecuted_blocks=1 00:32:33.763 00:32:33.763 ' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:33.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.763 --rc genhtml_branch_coverage=1 00:32:33.763 --rc genhtml_function_coverage=1 00:32:33.763 --rc genhtml_legend=1 00:32:33.763 --rc geninfo_all_blocks=1 00:32:33.763 --rc geninfo_unexecuted_blocks=1 00:32:33.763 00:32:33.763 ' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:33.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.763 --rc genhtml_branch_coverage=1 00:32:33.763 --rc genhtml_function_coverage=1 
00:32:33.763 --rc genhtml_legend=1 00:32:33.763 --rc geninfo_all_blocks=1 00:32:33.763 --rc geninfo_unexecuted_blocks=1 00:32:33.763 00:32:33.763 ' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.763 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.764 16:27:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.764 16:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:41.981 16:27:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:41.981 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.981 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:41.982 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:41.982 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:41.982 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.982 
16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:41.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:32:41.982 00:32:41.982 --- 10.0.0.2 ping statistics --- 00:32:41.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.982 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:32:41.982 00:32:41.982 --- 10.0.0.1 ping statistics --- 00:32:41.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.982 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:41.982 16:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1507486 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1507486 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1507486 ']' 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.982 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:41.982 [2024-11-20 16:27:17.092575] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
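[annotation] For reference, the nvmftestinit block above reduces to the following network split, with the target port isolated in its own namespace so both ends of the NVMe/TCP connection can live on one machine (device names, addresses, and commands exactly as logged):

```bash
# Target side: move cvl_0_0 into a private namespace as 10.0.0.2.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Initiator side: cvl_0_1 stays in the default namespace as 10.0.0.1.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Open the NVMe/TCP port toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two pings just logged (10.0.0.2 from the default namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm the path in both directions before the target is started.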
00:32:41.982 [2024-11-20 16:27:17.093687] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:32:41.982 [2024-11-20 16:27:17.093738] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.982 [2024-11-20 16:27:17.194811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:41.982 [2024-11-20 16:27:17.246898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.982 [2024-11-20 16:27:17.246951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.982 [2024-11-20 16:27:17.246959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.982 [2024-11-20 16:27:17.246966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.982 [2024-11-20 16:27:17.246972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:41.982 [2024-11-20 16:27:17.248824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.982 [2024-11-20 16:27:17.248976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.982 [2024-11-20 16:27:17.248977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:41.983 [2024-11-20 16:27:17.325392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:41.983 [2024-11-20 16:27:17.326344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:41.983 [2024-11-20 16:27:17.326821] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:41.983 [2024-11-20 16:27:17.326940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
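[annotation] At this point nvmf_tgt is up in interrupt mode with reactors on cores 0-2; the traced RPCs that follow provision the lvol stack under test. Condensed into a sketch (RPC names and flags copied verbatim from the log; the UUIDs are per-run, so they are captured into shell variables the way rpc.py's stdout allows):

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, flags as logged
m0=$($rpc bdev_malloc_create 64 512)                    # two 64 MiB, 512 B-block malloc bdevs
m1=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"  # stripe them into raid0
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore on top of the raid
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # 20 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

While spdk_nvme_perf writes to the exported namespace, the test then snapshots the live lvol, resizes it to 30 MiB, clones the snapshot, and inflates the clone (the bdev_lvol_snapshot / bdev_lvol_resize / bdev_lvol_clone / bdev_lvol_inflate calls traced below), exercising lvstore metadata updates under active I/O.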
00:32:42.277 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.277 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:42.277 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:42.277 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.277 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:42.277 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.277 16:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:42.277 [2024-11-20 16:27:18.117890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.277 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:42.578 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:42.578 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:42.838 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:42.838 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:43.099 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:43.099 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=deb55ad8-7846-4526-bc68-d0fca8ca3ea2 00:32:43.099 16:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u deb55ad8-7846-4526-bc68-d0fca8ca3ea2 lvol 20 00:32:43.368 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d000397a-d0ec-48b5-9ade-21ac176cce66 00:32:43.368 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:43.631 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d000397a-d0ec-48b5-9ade-21ac176cce66 00:32:43.631 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.892 [2024-11-20 16:27:19.689823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:43.892 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:44.153 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1508131 00:32:44.153 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:44.153 16:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:45.096 16:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d000397a-d0ec-48b5-9ade-21ac176cce66 MY_SNAPSHOT 00:32:45.357 16:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=274597f3-5c98-4d54-b7c2-94803938070a 00:32:45.357 16:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d000397a-d0ec-48b5-9ade-21ac176cce66 30 00:32:45.618 16:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 274597f3-5c98-4d54-b7c2-94803938070a MY_CLONE 00:32:45.879 16:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b1ecb93f-8db5-4dac-af7f-94bd611e6347 00:32:45.879 16:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b1ecb93f-8db5-4dac-af7f-94bd611e6347 00:32:46.452 16:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1508131 00:32:54.596 Initializing NVMe Controllers 00:32:54.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:54.596 Controller IO queue size 128, less than required. 00:32:54.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:54.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:54.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:54.596 Initialization complete. Launching workers. 
00:32:54.596 ========================================================
00:32:54.596 Latency(us)
00:32:54.596 Device Information : IOPS MiB/s Average min max
00:32:54.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15298.66 59.76 8369.26 1909.12 56026.27
00:32:54.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14990.16 58.56 8537.79 2375.42 71024.65
00:32:54.596 ========================================================
00:32:54.596 Total : 30288.82 118.32 8452.67 1909.12 71024.65
00:32:54.596
00:32:54.596 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:32:54.596 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d000397a-d0ec-48b5-9ade-21ac176cce66
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u deb55ad8-7846-4526-bc68-d0fca8ca3ea2
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:54.857 rmmod nvme_tcp
00:32:54.857 rmmod nvme_fabrics
00:32:54.857 rmmod nvme_keyring
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1507486 ']'
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1507486
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1507486 ']'
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1507486
00:32:54.857 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507486 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507486' 00:32:55.118 killing process with pid 1507486 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1507486 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1507486 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.118 16:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.668 00:32:57.668 real 0m23.849s 00:32:57.668 user 0m55.746s 00:32:57.668 sys 0m10.689s 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:57.668 ************************************ 00:32:57.668 END TEST nvmf_lvol 00:32:57.668 ************************************ 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:57.668 ************************************ 00:32:57.668 START TEST nvmf_lvs_grow 00:32:57.668 
************************************ 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:57.668 * Looking for test storage... 00:32:57.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:57.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.668 --rc genhtml_branch_coverage=1 00:32:57.668 --rc genhtml_function_coverage=1 00:32:57.668 --rc genhtml_legend=1 00:32:57.668 --rc geninfo_all_blocks=1 00:32:57.668 --rc geninfo_unexecuted_blocks=1 00:32:57.668 00:32:57.668 ' 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:57.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.668 --rc genhtml_branch_coverage=1 00:32:57.668 --rc genhtml_function_coverage=1 00:32:57.668 --rc genhtml_legend=1 00:32:57.668 --rc geninfo_all_blocks=1 00:32:57.668 --rc geninfo_unexecuted_blocks=1 00:32:57.668 00:32:57.668 ' 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:57.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.668 --rc genhtml_branch_coverage=1 00:32:57.668 --rc genhtml_function_coverage=1 00:32:57.668 --rc genhtml_legend=1 00:32:57.668 --rc geninfo_all_blocks=1 00:32:57.668 --rc geninfo_unexecuted_blocks=1 00:32:57.668 00:32:57.668 ' 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:57.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.668 --rc genhtml_branch_coverage=1 00:32:57.668 --rc genhtml_function_coverage=1 00:32:57.668 --rc genhtml_legend=1 00:32:57.668 --rc geninfo_all_blocks=1 00:32:57.668 --rc geninfo_unexecuted_blocks=1 00:32:57.668 00:32:57.668 ' 00:32:57.668 16:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.668 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.669 16:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:05.820 16:27:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.820 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:05.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:05.821 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:05.821 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:05.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.821 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.822 16:27:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:05.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:33:05.822 00:33:05.822 --- 10.0.0.2 ping statistics --- 00:33:05.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.822 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:05.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:33:05.822 00:33:05.822 --- 10.0.0.1 ping statistics --- 00:33:05.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.822 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1514213 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1514213 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1514213 ']' 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.822 16:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:05.822 [2024-11-20 16:27:40.900318] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
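For reference, the nvmftestinit sequence traced above builds a two-namespace NVMe/TCP topology on a single host: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator. A condensed recap of those steps, using the interface names and addresses from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port (the ipts wrapper adds an SPDK_NVMF comment tag)
ping -c 1 10.0.0.2                                                  # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt itself is then launched inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1" invocation above), so initiator-to-target traffic crosses the physical links rather than loopback.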
00:33:05.822 [2024-11-20 16:27:40.901464] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:05.822 [2024-11-20 16:27:40.901516] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.822 [2024-11-20 16:27:41.001666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.822 [2024-11-20 16:27:41.053955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.822 [2024-11-20 16:27:41.054011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.822 [2024-11-20 16:27:41.054020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.822 [2024-11-20 16:27:41.054027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.822 [2024-11-20 16:27:41.054033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.822 [2024-11-20 16:27:41.054781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.822 [2024-11-20 16:27:41.131860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:05.822 [2024-11-20 16:27:41.132148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:05.822 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.822 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:05.822 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:05.822 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.822 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:06.083 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.083 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:06.083 [2024-11-20 16:27:41.943686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.083 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:06.083 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:06.083 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.083 16:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:06.083 ************************************ 00:33:06.083 START TEST lvs_grow_clean 00:33:06.083 ************************************ 00:33:06.083 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:06.083 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:06.083 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:06.083 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:06.083 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:06.083 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:06.084 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:06.084 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:06.084 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:06.344 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:06.344 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:06.344 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:06.605 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:06.605 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:06.605 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:06.866 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:06.866 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:06.866 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b686df1a-74a6-45d0-acc0-34d604c2b671 lvol 150 00:33:06.866 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c039e159-2d70-48c4-9ead-ad648423a759 00:33:06.866 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:06.866 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:07.126 [2024-11-20 16:27:42.955382] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:07.126 [2024-11-20 16:27:42.955558] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:07.126 true 00:33:07.126 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:07.126 16:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:07.388 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:07.388 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:07.648 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c039e159-2d70-48c4-9ead-ad648423a759 00:33:07.648 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:07.909 [2024-11-20 16:27:43.708071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.910 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1514911 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1514911 /var/tmp/bdevperf.sock 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1514911 ']' 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:08.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.171 16:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:08.172 [2024-11-20 16:27:43.930002] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:08.172 [2024-11-20 16:27:43.930065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514911 ] 00:33:08.172 [2024-11-20 16:27:44.020437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.172 [2024-11-20 16:27:44.071872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.115 16:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.115 16:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:09.115 16:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:09.115 Nvme0n1 00:33:09.376 16:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:09.376 [ 00:33:09.376 { 00:33:09.376 "name": "Nvme0n1", 00:33:09.376 "aliases": [ 00:33:09.376 "c039e159-2d70-48c4-9ead-ad648423a759" 00:33:09.376 ], 00:33:09.376 "product_name": "NVMe disk", 00:33:09.376 "block_size": 4096, 00:33:09.376 "num_blocks": 38912, 00:33:09.376 "uuid": "c039e159-2d70-48c4-9ead-ad648423a759", 00:33:09.376 "numa_id": 0, 00:33:09.376 "assigned_rate_limits": { 00:33:09.376 "rw_ios_per_sec": 0, 00:33:09.376 "rw_mbytes_per_sec": 0, 00:33:09.376 "r_mbytes_per_sec": 0, 00:33:09.376 "w_mbytes_per_sec": 0 00:33:09.376 }, 00:33:09.376 "claimed": false, 00:33:09.376 "zoned": false, 00:33:09.376 "supported_io_types": { 00:33:09.376 "read": true, 00:33:09.376 "write": true, 00:33:09.376 "unmap": true, 00:33:09.376 "flush": true, 00:33:09.376 "reset": true, 00:33:09.376 "nvme_admin": true, 00:33:09.376 "nvme_io": true, 00:33:09.376 "nvme_io_md": false, 00:33:09.376 "write_zeroes": true, 00:33:09.376 "zcopy": false, 00:33:09.376 "get_zone_info": false, 00:33:09.376 "zone_management": false, 00:33:09.376 "zone_append": false, 00:33:09.376 "compare": true, 00:33:09.376 "compare_and_write": true, 00:33:09.376 "abort": true, 00:33:09.376 "seek_hole": false, 00:33:09.376 "seek_data": false, 00:33:09.376 "copy": true, 
00:33:09.376 "nvme_iov_md": false 00:33:09.376 }, 00:33:09.376 "memory_domains": [ 00:33:09.376 { 00:33:09.376 "dma_device_id": "system", 00:33:09.376 "dma_device_type": 1 00:33:09.376 } 00:33:09.376 ], 00:33:09.376 "driver_specific": { 00:33:09.376 "nvme": [ 00:33:09.376 { 00:33:09.376 "trid": { 00:33:09.376 "trtype": "TCP", 00:33:09.376 "adrfam": "IPv4", 00:33:09.376 "traddr": "10.0.0.2", 00:33:09.376 "trsvcid": "4420", 00:33:09.376 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:09.376 }, 00:33:09.376 "ctrlr_data": { 00:33:09.376 "cntlid": 1, 00:33:09.376 "vendor_id": "0x8086", 00:33:09.376 "model_number": "SPDK bdev Controller", 00:33:09.376 "serial_number": "SPDK0", 00:33:09.376 "firmware_revision": "25.01", 00:33:09.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.376 "oacs": { 00:33:09.376 "security": 0, 00:33:09.376 "format": 0, 00:33:09.376 "firmware": 0, 00:33:09.376 "ns_manage": 0 00:33:09.376 }, 00:33:09.376 "multi_ctrlr": true, 00:33:09.376 "ana_reporting": false 00:33:09.376 }, 00:33:09.376 "vs": { 00:33:09.376 "nvme_version": "1.3" 00:33:09.376 }, 00:33:09.376 "ns_data": { 00:33:09.376 "id": 1, 00:33:09.376 "can_share": true 00:33:09.376 } 00:33:09.376 } 00:33:09.376 ], 00:33:09.376 "mp_policy": "active_passive" 00:33:09.376 } 00:33:09.376 } 00:33:09.376 ] 00:33:09.376 16:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1515232 00:33:09.376 16:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:09.376 16:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:09.637 Running I/O for 10 seconds... 
00:33:10.580 Latency(us)
[2024-11-20T15:27:46.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:10.580 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00
00:33:10.580
[2024-11-20T15:27:46.516Z] ===================================================================================================================
00:33:10.580
[2024-11-20T15:27:46.516Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00
00:33:10.580
00:33:11.523 16:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b686df1a-74a6-45d0-acc0-34d604c2b671
00:33:11.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:11.523 Nvme0n1 : 2.00 16954.50 66.23 0.00 0.00 0.00 0.00 0.00
00:33:11.523
[2024-11-20T15:27:47.459Z] ===================================================================================================================
00:33:11.523
[2024-11-20T15:27:47.459Z] Total : 16954.50 66.23 0.00 0.00 0.00 0.00 0.00
00:33:11.523
00:33:11.523 true
00:33:11.523 16:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:33:11.523 16:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671
00:33:11.784 16:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:33:11.784 16:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:33:11.784 16:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1515232
00:33:12.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:12.726 Nvme0n1 : 3.00 17187.33 67.14 0.00 0.00 0.00 0.00 0.00
00:33:12.726
[2024-11-20T15:27:48.662Z] ===================================================================================================================
00:33:12.726
[2024-11-20T15:27:48.662Z] Total : 17187.33 67.14 0.00 0.00 0.00 0.00 0.00
00:33:12.726
00:33:13.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:13.668 Nvme0n1 : 4.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00
00:33:13.668
[2024-11-20T15:27:49.604Z] ===================================================================================================================
00:33:13.668
[2024-11-20T15:27:49.604Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00
00:33:13.668
00:33:14.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:14.609 Nvme0n1 : 5.00 19291.40 75.36 0.00 0.00 0.00 0.00 0.00
00:33:14.609
[2024-11-20T15:27:50.545Z] ===================================================================================================================
00:33:14.609
[2024-11-20T15:27:50.545Z] Total : 19291.40 75.36 0.00 0.00 0.00 0.00 0.00
00:33:14.609
00:33:15.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:15.550 Nvme0n1 : 6.00 20307.00 79.32 0.00 0.00 0.00 0.00 0.00
00:33:15.550
[2024-11-20T15:27:51.486Z] ===================================================================================================================
00:33:15.550
[2024-11-20T15:27:51.486Z] Total : 20307.00 79.32 0.00 0.00 0.00 0.00 0.00
00:33:15.550
00:33:16.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:16.492 Nvme0n1 : 7.00 21034.57 82.17 0.00 0.00 0.00 0.00 0.00
00:33:16.492
[2024-11-20T15:27:52.428Z] ===================================================================================================================
00:33:16.492
[2024-11-20T15:27:52.428Z] Total : 21034.57 82.17 0.00 0.00 0.00 0.00 0.00
00:33:16.492
00:33:17.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:17.433 Nvme0n1 : 8.00 21580.25 84.30 0.00 0.00 0.00 0.00 0.00
00:33:17.433
[2024-11-20T15:27:53.369Z] ===================================================================================================================
00:33:17.433
[2024-11-20T15:27:53.369Z] Total : 21580.25 84.30 0.00 0.00 0.00 0.00 0.00
00:33:17.433
00:33:18.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:18.818 Nvme0n1 : 9.00 22004.67 85.96 0.00 0.00 0.00 0.00 0.00
00:33:18.818
[2024-11-20T15:27:54.754Z] ===================================================================================================================
00:33:18.818
[2024-11-20T15:27:54.754Z] Total : 22004.67 85.96 0.00 0.00 0.00 0.00 0.00
00:33:18.818
00:33:19.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:19.760 Nvme0n1 : 10.00 22352.30 87.31 0.00 0.00 0.00 0.00 0.00
00:33:19.760
[2024-11-20T15:27:55.696Z] ===================================================================================================================
00:33:19.760
[2024-11-20T15:27:55.696Z] Total : 22352.30 87.31 0.00 0.00 0.00 0.00 0.00
00:33:19.760
00:33:19.760
00:33:19.760 Latency(us)
[2024-11-20T15:27:55.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:19.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:19.760 Nvme0n1 : 10.00 22351.14 87.31 0.00 0.00 5723.62 2880.85 31894.19
00:33:19.760
[2024-11-20T15:27:55.696Z] ===================================================================================================================
00:33:19.760
[2024-11-20T15:27:55.696Z] Total : 22351.14 87.31 0.00 0.00 5723.62 2880.85 31894.19
00:33:19.760
00:33:19.760 {
00:33:19.760 "results": [
00:33:19.760 {
00:33:19.760 "job": "Nvme0n1",
00:33:19.760 "core_mask": "0x2",
00:33:19.760 "workload": "randwrite",
00:33:19.760 "status": "finished",
00:33:19.760 "queue_depth": 128,
00:33:19.760 "io_size": 4096,
00:33:19.760 "runtime": 10.002623,
00:33:19.760 "iops": 22351.13729668708,
00:33:19.760 "mibps": 87.30913006518391,
00:33:19.760 "io_failed": 0,
00:33:19.760 "io_timeout": 0,
00:33:19.760 "avg_latency_us": 5723.622068315665,
00:33:19.760 "min_latency_us": 2880.8533333333335,
00:33:19.760 "max_latency_us": 31894.18666666667
00:33:19.760 }
00:33:19.760 ],
00:33:19.760 "core_count": 1
00:33:19.760 }
00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1514911
00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1514911 ']'
00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1514911
00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514911 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1514911' 00:33:19.760 killing process with pid 1514911 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1514911 00:33:19.760 Received shutdown signal, test time was about 10.000000 seconds 00:33:19.760 00:33:19.760 Latency(us) 00:33:19.760 [2024-11-20T15:27:55.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.760 [2024-11-20T15:27:55.696Z] =================================================================================================================== 00:33:19.760 [2024-11-20T15:27:55.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1514911 00:33:19.760 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:20.021 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:20.021 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:20.021 16:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:20.281 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:20.281 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:20.281 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:20.542 [2024-11-20 16:27:56.295457] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 
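
The NOT wrapper just issued asserts the negative case: once the backing aio_bdev is hot-removed, the lvstore must be unloaded, so querying it has to fail. A minimal sketch of the two checks around this point, with plain `!` standing in for the suite's NOT helper and lvs holding the UUID used above:

  lvs=b686df1a-74a6-45d0-acc0-34d604c2b671

  # All clusters released by the finished workload: 99 total minus the 38 the lvol held
  free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  (( free == 61 ))

  # Hot-remove the base bdev; the same query must now fail with -19 "No such device"
  scripts/rpc.py bdev_aio_delete aio_bdev
  ! scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"
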
00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:20.542 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:20.803 request: 00:33:20.803 { 00:33:20.803 "uuid": "b686df1a-74a6-45d0-acc0-34d604c2b671", 00:33:20.803 "method": "bdev_lvol_get_lvstores", 00:33:20.803 "req_id": 1 00:33:20.803 } 00:33:20.803 Got JSON-RPC error response 00:33:20.803 response: 00:33:20.803 { 00:33:20.803 "code": -19, 00:33:20.803 "message": "No such device" 00:33:20.803 } 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:20.803 aio_bdev 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
c039e159-2d70-48c4-9ead-ad648423a759 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c039e159-2d70-48c4-9ead-ad648423a759 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:20.803 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:21.064 16:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c039e159-2d70-48c4-9ead-ad648423a759 -t 2000 00:33:21.325 [ 00:33:21.325 { 00:33:21.325 "name": "c039e159-2d70-48c4-9ead-ad648423a759", 00:33:21.325 "aliases": [ 00:33:21.325 "lvs/lvol" 00:33:21.325 ], 00:33:21.325 "product_name": "Logical Volume", 00:33:21.325 "block_size": 4096, 00:33:21.325 "num_blocks": 38912, 00:33:21.325 "uuid": "c039e159-2d70-48c4-9ead-ad648423a759", 00:33:21.326 "assigned_rate_limits": { 00:33:21.326 "rw_ios_per_sec": 0, 00:33:21.326 "rw_mbytes_per_sec": 0, 00:33:21.326 "r_mbytes_per_sec": 0, 00:33:21.326 "w_mbytes_per_sec": 0 00:33:21.326 }, 00:33:21.326 "claimed": false, 00:33:21.326 "zoned": false, 00:33:21.326 "supported_io_types": { 00:33:21.326 "read": true, 00:33:21.326 "write": true, 00:33:21.326 "unmap": true, 00:33:21.326 "flush": false, 00:33:21.326 "reset": true, 00:33:21.326 "nvme_admin": false, 00:33:21.326 "nvme_io": false, 00:33:21.326 "nvme_io_md": false, 00:33:21.326 "write_zeroes": true, 00:33:21.326 "zcopy": false, 00:33:21.326 "get_zone_info": false, 00:33:21.326 "zone_management": false, 00:33:21.326 "zone_append": false, 00:33:21.326 "compare": false, 00:33:21.326 "compare_and_write": false, 00:33:21.326 "abort": false, 00:33:21.326 "seek_hole": true, 00:33:21.326 "seek_data": true, 00:33:21.326 "copy": false, 00:33:21.326 "nvme_iov_md": false 00:33:21.326 }, 00:33:21.326 "driver_specific": { 00:33:21.326 "lvol": { 00:33:21.326 "lvol_store_uuid": "b686df1a-74a6-45d0-acc0-34d604c2b671", 00:33:21.326 "base_bdev": "aio_bdev", 00:33:21.326 "thin_provision": false, 00:33:21.326 "num_allocated_clusters": 38, 00:33:21.326 "snapshot": false, 00:33:21.326 "clone": false, 00:33:21.326 "esnap_clone": false 00:33:21.326 } 00:33:21.326 } 00:33:21.326 } 00:33:21.326 ] 00:33:21.326 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:21.326 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:21.326 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:21.326 16:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:21.326 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:21.326 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:21.587 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:21.587 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c039e159-2d70-48c4-9ead-ad648423a759 00:33:21.847 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b686df1a-74a6-45d0-acc0-34d604c2b671 00:33:22.108 16:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:22.108 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:22.368 00:33:22.368 real 0m16.040s 00:33:22.368 user 0m15.649s 00:33:22.368 sys 0m1.496s 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.368 ************************************ 00:33:22.368 END TEST lvs_grow_clean 00:33:22.368 ************************************ 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:22.368 ************************************ 00:33:22.368 START TEST lvs_grow_dirty 00:33:22.368 ************************************ 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:22.368 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:22.369 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:22.369 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:22.369 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:22.369 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:22.369 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:22.369 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:22.369 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:22.629 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:22.629 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:22.629 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:22.629 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:22.629 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:22.933 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:22.933 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:22.933 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e8d735e-26e6-45fa-b87c-9015ab927013 lvol 150 00:33:23.239 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=df05be51-43eb-4164-b63e-f30275cd1088 00:33:23.239 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:23.239 16:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:23.239 [2024-11-20 16:27:59.059381] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:23.239 [2024-11-20 16:27:59.059554] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:23.239 true 00:33:23.239 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:23.239 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:23.499 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:23.499 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:23.761 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df05be51-43eb-4164-b63e-f30275cd1088 00:33:23.761 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:24.021 [2024-11-20 16:27:59.784005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.021 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:24.021 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:24.021 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1517988 00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1517988 /var/tmp/bdevperf.sock 00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1517988 ']' 00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:24.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
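
At this point the dirty variant has rebuilt the same geometry the clean test used, and the cluster counts asserted throughout are straight arithmetic. A short sketch of the math, assuming the lvstore metadata occupies exactly one 4 MiB cluster here (which matches every count observed in this log):

  # 4 MiB clusters (--cluster-sz 4194304), 4 KiB blocks:
  echo $(( 200 / 4 - 1 ))    # 49: 200 MiB aio file -> 50 clusters, 1 for metadata
  echo $(( (150 + 3) / 4 ))  # 38: 150 MiB lvol rounds up to 38 clusters = 38912 blocks
  echo $(( 400 / 4 - 1 ))    # 99: after truncate -s 400M plus bdev_aio_rescan
  echo $(( 99 - 38 ))        # 61: free clusters once the lvol's 38 are subtracted
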
00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.282 16:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:24.282 [2024-11-20 16:27:59.986151] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:24.282 [2024-11-20 16:27:59.986218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517988 ] 00:33:24.282 [2024-11-20 16:28:00.072766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.282 [2024-11-20 16:28:00.104049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.282 16:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.282 16:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:24.282 16:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:24.854 Nvme0n1 00:33:24.854 16:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:24.854 [ 00:33:24.854 { 00:33:24.854 "name": "Nvme0n1", 00:33:24.854 "aliases": [ 00:33:24.854 "df05be51-43eb-4164-b63e-f30275cd1088" 00:33:24.854 ], 00:33:24.854 "product_name": "NVMe disk", 00:33:24.854 "block_size": 4096, 00:33:24.854 "num_blocks": 38912, 00:33:24.854 "uuid": "df05be51-43eb-4164-b63e-f30275cd1088", 00:33:24.854 "numa_id": 0, 00:33:24.854 "assigned_rate_limits": { 00:33:24.854 "rw_ios_per_sec": 0, 00:33:24.854 "rw_mbytes_per_sec": 0, 00:33:24.854 "r_mbytes_per_sec": 0, 00:33:24.854 "w_mbytes_per_sec": 0 00:33:24.854 }, 00:33:24.854 "claimed": false, 00:33:24.854 "zoned": false, 00:33:24.854 "supported_io_types": { 00:33:24.854 "read": true, 00:33:24.854 "write": true, 00:33:24.854 "unmap": true, 00:33:24.854 "flush": true, 00:33:24.854 "reset": true, 00:33:24.854 "nvme_admin": true, 00:33:24.854 "nvme_io": true, 00:33:24.854 "nvme_io_md": false, 00:33:24.854 "write_zeroes": true, 00:33:24.854 "zcopy": false, 00:33:24.854 "get_zone_info": false, 00:33:24.854 "zone_management": false, 00:33:24.854 "zone_append": false, 00:33:24.854 "compare": true, 00:33:24.854 "compare_and_write": true, 00:33:24.854 "abort": true, 00:33:24.854 "seek_hole": false, 00:33:24.854 "seek_data": false, 00:33:24.854 "copy": true, 00:33:24.854 "nvme_iov_md": false 00:33:24.854 }, 00:33:24.854 "memory_domains": [ 00:33:24.854 { 00:33:24.854 "dma_device_id": "system", 00:33:24.854 "dma_device_type": 1 00:33:24.854 } 00:33:24.854 ], 00:33:24.854 "driver_specific": { 00:33:24.854 "nvme": [ 00:33:24.854 { 00:33:24.854 "trid": { 00:33:24.854 "trtype": "TCP", 00:33:24.854 "adrfam": "IPv4", 00:33:24.854 "traddr": "10.0.0.2", 00:33:24.854 "trsvcid": "4420", 00:33:24.854 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:24.854 }, 00:33:24.854 "ctrlr_data": 
{ 00:33:24.854 "cntlid": 1, 00:33:24.854 "vendor_id": "0x8086", 00:33:24.854 "model_number": "SPDK bdev Controller", 00:33:24.854 "serial_number": "SPDK0", 00:33:24.854 "firmware_revision": "25.01", 00:33:24.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.854 "oacs": { 00:33:24.854 "security": 0, 00:33:24.854 "format": 0, 00:33:24.854 "firmware": 0, 00:33:24.854 "ns_manage": 0 00:33:24.854 }, 00:33:24.854 "multi_ctrlr": true, 00:33:24.854 "ana_reporting": false 00:33:24.854 }, 00:33:24.854 "vs": { 00:33:24.854 "nvme_version": "1.3" 00:33:24.854 }, 00:33:24.854 "ns_data": { 00:33:24.854 "id": 1, 00:33:24.854 "can_share": true 00:33:24.854 } 00:33:24.854 } 00:33:24.854 ], 00:33:24.854 "mp_policy": "active_passive" 00:33:24.854 } 00:33:24.854 } 00:33:24.854 ] 00:33:25.115 16:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1518017 00:33:25.115 16:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:25.115 16:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:25.115 Running I/O for 10 seconds... 00:33:26.055 Latency(us) 00:33:26.055 [2024-11-20T15:28:01.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:26.055 Nvme0n1 : 1.00 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:33:26.055 [2024-11-20T15:28:01.991Z] =================================================================================================================== 00:33:26.055 [2024-11-20T15:28:01.991Z] Total : 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:33:26.055 00:33:26.995 16:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:26.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:26.995 Nvme0n1 : 2.00 17647.00 68.93 0.00 0.00 0.00 0.00 0.00 00:33:26.995 [2024-11-20T15:28:02.931Z] =================================================================================================================== 00:33:26.995 [2024-11-20T15:28:02.931Z] Total : 17647.00 68.93 0.00 0.00 0.00 0.00 0.00 00:33:26.995 00:33:27.255 true 00:33:27.255 16:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:27.255 16:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:27.255 16:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:27.255 16:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:27.255 16:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1518017 00:33:28.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:28.195 Nvme0n1 : 
3.00 17737.67 69.29 0.00 0.00 0.00 0.00 0.00 00:33:28.195 [2024-11-20T15:28:04.131Z] =================================================================================================================== 00:33:28.195 [2024-11-20T15:28:04.132Z] Total : 17737.67 69.29 0.00 0.00 0.00 0.00 0.00 00:33:28.196 00:33:29.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:29.138 Nvme0n1 : 4.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:33:29.138 [2024-11-20T15:28:05.074Z] =================================================================================================================== 00:33:29.138 [2024-11-20T15:28:05.074Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:33:29.138 00:33:30.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:30.078 Nvme0n1 : 5.00 18224.60 71.19 0.00 0.00 0.00 0.00 0.00 00:33:30.078 [2024-11-20T15:28:06.014Z] =================================================================================================================== 00:33:30.078 [2024-11-20T15:28:06.014Z] Total : 18224.60 71.19 0.00 0.00 0.00 0.00 0.00 00:33:30.078 00:33:31.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.020 Nvme0n1 : 6.00 19418.00 75.85 0.00 0.00 0.00 0.00 0.00 00:33:31.020 [2024-11-20T15:28:06.956Z] =================================================================================================================== 00:33:31.020 [2024-11-20T15:28:06.956Z] Total : 19418.00 75.85 0.00 0.00 0.00 0.00 0.00 00:33:31.020 00:33:31.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.961 Nvme0n1 : 7.00 20263.57 79.15 0.00 0.00 0.00 0.00 0.00 00:33:31.961 [2024-11-20T15:28:07.897Z] =================================================================================================================== 00:33:31.961 [2024-11-20T15:28:07.897Z] Total : 20263.57 79.15 0.00 0.00 0.00 0.00 0.00 00:33:31.961 00:33:33.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:33.347 Nvme0n1 : 8.00 20903.75 81.66 0.00 0.00 0.00 0.00 0.00 00:33:33.347 [2024-11-20T15:28:09.283Z] =================================================================================================================== 00:33:33.347 [2024-11-20T15:28:09.283Z] Total : 20903.75 81.66 0.00 0.00 0.00 0.00 0.00 00:33:33.347 00:33:34.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:34.290 Nvme0n1 : 9.00 21403.33 83.61 0.00 0.00 0.00 0.00 0.00 00:33:34.290 [2024-11-20T15:28:10.226Z] =================================================================================================================== 00:33:34.290 [2024-11-20T15:28:10.226Z] Total : 21403.33 83.61 0.00 0.00 0.00 0.00 0.00 00:33:34.290 00:33:35.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.232 Nvme0n1 : 10.00 21803.00 85.17 0.00 0.00 0.00 0.00 0.00 00:33:35.232 [2024-11-20T15:28:11.168Z] =================================================================================================================== 00:33:35.232 [2024-11-20T15:28:11.168Z] Total : 21803.00 85.17 0.00 0.00 0.00 0.00 0.00 00:33:35.232 00:33:35.232 00:33:35.232 Latency(us) 00:33:35.232 [2024-11-20T15:28:11.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.232 Nvme0n1 : 10.01 21804.29 85.17 0.00 0.00 5867.47 2894.51 31894.19 00:33:35.232 
[2024-11-20T15:28:11.168Z] =================================================================================================================== 00:33:35.232 [2024-11-20T15:28:11.168Z] Total : 21804.29 85.17 0.00 0.00 5867.47 2894.51 31894.19 00:33:35.232 { 00:33:35.232 "results": [ 00:33:35.232 { 00:33:35.232 "job": "Nvme0n1", 00:33:35.232 "core_mask": "0x2", 00:33:35.232 "workload": "randwrite", 00:33:35.232 "status": "finished", 00:33:35.232 "queue_depth": 128, 00:33:35.232 "io_size": 4096, 00:33:35.232 "runtime": 10.005277, 00:33:35.232 "iops": 21804.293874122624, 00:33:35.232 "mibps": 85.1730229457915, 00:33:35.232 "io_failed": 0, 00:33:35.232 "io_timeout": 0, 00:33:35.232 "avg_latency_us": 5867.472842496417, 00:33:35.232 "min_latency_us": 2894.5066666666667, 00:33:35.232 "max_latency_us": 31894.18666666667 00:33:35.232 } 00:33:35.232 ], 00:33:35.232 "core_count": 1 00:33:35.232 } 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1517988 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1517988 ']' 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1517988 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517988 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:35.232 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517988' 00:33:35.233 killing process with pid 1517988 00:33:35.233 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1517988 00:33:35.233 Received shutdown signal, test time was about 10.000000 seconds 00:33:35.233 00:33:35.233 Latency(us) 00:33:35.233 [2024-11-20T15:28:11.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.233 [2024-11-20T15:28:11.169Z] =================================================================================================================== 00:33:35.233 [2024-11-20T15:28:11.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.233 16:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1517988 00:33:35.233 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:35.494 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1514213 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1514213 00:33:35.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1514213 Killed "${NVMF_APP[@]}" "$@" 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1520078 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1520078 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1520078 ']' 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
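
This is the step that makes the variant "dirty": instead of deleting the lvstore, the target is killed with SIGKILL while the grown lvstore is still loaded, so its metadata is never cleanly written out, and the next load has to recover rather than open clean. A minimal sketch of the sequence, with $nvmfpid standing in for pid 1514213 from the log and the suite's waitforlisten helper used as it is here:

  # Leave the lvstore dirty: no clean unload of the blobstore
  kill -9 "$nvmfpid"

  # Restart the target (interrupt mode, core 0) in the test netns and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  waitforlisten $! /var/tmp/spdk.sock
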
00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.754 16:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:36.015 [2024-11-20 16:28:11.718045] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:36.015 [2024-11-20 16:28:11.719599] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:36.015 [2024-11-20 16:28:11.719660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.015 [2024-11-20 16:28:11.815712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.016 [2024-11-20 16:28:11.852651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.016 [2024-11-20 16:28:11.852691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.016 [2024-11-20 16:28:11.852697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.016 [2024-11-20 16:28:11.852702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.016 [2024-11-20 16:28:11.852707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.016 [2024-11-20 16:28:11.853254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.016 [2024-11-20 16:28:11.906489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:36.016 [2024-11-20 16:28:11.906700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
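
The recovery notices just after this point ("Performing recovery on blobstore", "Recover: blob 0x0"/"0x1") are the payoff of the SIGKILL: re-creating aio_bdev triggers the examine path, which finds the lvstore, detects the dirty shutdown, and replays its blob metadata. What the test then asserts is that the grow done before the kill survived, sketched here with the UUID from this run:

  lvs=8e8d735e-26e6-45fa-b87c-9015ab927013

  # Post-recovery: the pre-kill grow must still be visible in the lvstore geometry
  free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))
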
00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:36.959 [2024-11-20 16:28:12.727364] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:36.959 [2024-11-20 16:28:12.727574] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:36.959 [2024-11-20 16:28:12.727663] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev df05be51-43eb-4164-b63e-f30275cd1088 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=df05be51-43eb-4164-b63e-f30275cd1088 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:36.959 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:37.221 16:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df05be51-43eb-4164-b63e-f30275cd1088 -t 2000 00:33:37.221 [ 00:33:37.221 { 00:33:37.221 "name": "df05be51-43eb-4164-b63e-f30275cd1088", 00:33:37.221 "aliases": [ 00:33:37.221 "lvs/lvol" 00:33:37.221 ], 00:33:37.221 "product_name": "Logical Volume", 00:33:37.221 "block_size": 4096, 00:33:37.221 "num_blocks": 38912, 00:33:37.221 "uuid": "df05be51-43eb-4164-b63e-f30275cd1088", 00:33:37.221 "assigned_rate_limits": { 00:33:37.221 "rw_ios_per_sec": 0, 00:33:37.221 "rw_mbytes_per_sec": 0, 00:33:37.221 
"r_mbytes_per_sec": 0, 00:33:37.221 "w_mbytes_per_sec": 0 00:33:37.221 }, 00:33:37.221 "claimed": false, 00:33:37.221 "zoned": false, 00:33:37.221 "supported_io_types": { 00:33:37.221 "read": true, 00:33:37.221 "write": true, 00:33:37.221 "unmap": true, 00:33:37.221 "flush": false, 00:33:37.221 "reset": true, 00:33:37.221 "nvme_admin": false, 00:33:37.221 "nvme_io": false, 00:33:37.221 "nvme_io_md": false, 00:33:37.221 "write_zeroes": true, 00:33:37.221 "zcopy": false, 00:33:37.221 "get_zone_info": false, 00:33:37.221 "zone_management": false, 00:33:37.221 "zone_append": false, 00:33:37.221 "compare": false, 00:33:37.221 "compare_and_write": false, 00:33:37.221 "abort": false, 00:33:37.221 "seek_hole": true, 00:33:37.221 "seek_data": true, 00:33:37.221 "copy": false, 00:33:37.221 "nvme_iov_md": false 00:33:37.221 }, 00:33:37.221 "driver_specific": { 00:33:37.221 "lvol": { 00:33:37.221 "lvol_store_uuid": "8e8d735e-26e6-45fa-b87c-9015ab927013", 00:33:37.221 "base_bdev": "aio_bdev", 00:33:37.221 "thin_provision": false, 00:33:37.221 "num_allocated_clusters": 38, 00:33:37.221 "snapshot": false, 00:33:37.221 "clone": false, 00:33:37.221 "esnap_clone": false 00:33:37.221 } 00:33:37.221 } 00:33:37.221 } 00:33:37.221 ] 00:33:37.221 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:37.221 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:37.221 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:37.482 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:37.482 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:37.482 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:37.743 [2024-11-20 16:28:13.589787] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:37.743 16:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:37.743 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:38.004 request: 00:33:38.004 { 00:33:38.004 "uuid": "8e8d735e-26e6-45fa-b87c-9015ab927013", 00:33:38.004 "method": "bdev_lvol_get_lvstores", 00:33:38.004 "req_id": 1 00:33:38.004 } 00:33:38.004 Got JSON-RPC error response 00:33:38.004 response: 00:33:38.004 { 00:33:38.004 "code": -19, 00:33:38.004 "message": "No such device" 00:33:38.004 } 00:33:38.004 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:38.004 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:38.004 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:38.004 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:38.004 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:38.265 aio_bdev 00:33:38.265 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df05be51-43eb-4164-b63e-f30275cd1088 00:33:38.265 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=df05be51-43eb-4164-b63e-f30275cd1088 00:33:38.265 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:38.265 16:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:38.265 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:38.265 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:38.265 16:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:38.265 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df05be51-43eb-4164-b63e-f30275cd1088 -t 2000 00:33:38.525 [ 00:33:38.525 { 00:33:38.525 "name": "df05be51-43eb-4164-b63e-f30275cd1088", 00:33:38.525 "aliases": [ 00:33:38.525 "lvs/lvol" 00:33:38.525 ], 00:33:38.525 "product_name": "Logical Volume", 00:33:38.525 "block_size": 4096, 00:33:38.525 "num_blocks": 38912, 00:33:38.525 "uuid": "df05be51-43eb-4164-b63e-f30275cd1088", 00:33:38.525 "assigned_rate_limits": { 00:33:38.525 "rw_ios_per_sec": 0, 00:33:38.525 "rw_mbytes_per_sec": 0, 00:33:38.525 "r_mbytes_per_sec": 0, 00:33:38.525 "w_mbytes_per_sec": 0 00:33:38.525 }, 00:33:38.525 "claimed": false, 00:33:38.525 "zoned": false, 00:33:38.525 "supported_io_types": { 00:33:38.525 "read": true, 00:33:38.525 "write": true, 00:33:38.525 "unmap": true, 00:33:38.525 "flush": false, 00:33:38.525 "reset": true, 00:33:38.525 "nvme_admin": false, 00:33:38.525 "nvme_io": false, 00:33:38.525 "nvme_io_md": false, 00:33:38.525 "write_zeroes": true, 00:33:38.525 "zcopy": false, 00:33:38.525 "get_zone_info": false, 00:33:38.525 "zone_management": false, 00:33:38.525 "zone_append": false, 00:33:38.525 "compare": false, 00:33:38.525 "compare_and_write": false, 00:33:38.525 "abort": false, 00:33:38.525 "seek_hole": true, 00:33:38.525 "seek_data": true, 00:33:38.525 "copy": false, 00:33:38.525 "nvme_iov_md": false 00:33:38.525 }, 00:33:38.525 "driver_specific": { 00:33:38.525 "lvol": { 00:33:38.525 "lvol_store_uuid": "8e8d735e-26e6-45fa-b87c-9015ab927013", 00:33:38.525 "base_bdev": "aio_bdev", 00:33:38.525 "thin_provision": false, 00:33:38.525 "num_allocated_clusters": 38, 00:33:38.525 "snapshot": false, 00:33:38.525 "clone": false, 00:33:38.525 "esnap_clone": false 00:33:38.525 } 00:33:38.525 } 00:33:38.525 } 00:33:38.525 ] 00:33:38.525 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:38.525 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:38.526 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:38.787 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:38.788 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:38.788 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:38.788 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:38.788 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df05be51-43eb-4164-b63e-f30275cd1088 00:33:39.048 16:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e8d735e-26e6-45fa-b87c-9015ab927013 00:33:39.308 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:39.309 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:39.570 00:33:39.570 real 0m17.133s 00:33:39.570 user 0m34.529s 00:33:39.570 sys 0m3.444s 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:39.570 ************************************ 00:33:39.570 END TEST lvs_grow_dirty 00:33:39.570 ************************************ 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:39.570 nvmf_trace.0 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:39.570 
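Condensed, the lvs_grow_dirty teardown traced above unwinds the stack strictly child-first; a minimal sketch using this run's UUIDs (assuming SPDK's rpc.py is on PATH and the target listens on the default RPC socket):

  rpc.py bdev_lvol_delete df05be51-43eb-4164-b63e-f30275cd1088             # the lvol goes first
  rpc.py bdev_lvol_delete_lvstore -u 8e8d735e-26e6-45fa-b87c-9015ab927013  # then the store that owned it
  rpc.py bdev_aio_delete aio_bdev                                          # then the backing AIO bdev
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev  # finally its backing file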
16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:39.570 rmmod nvme_tcp 00:33:39.570 rmmod nvme_fabrics 00:33:39.570 rmmod nvme_keyring 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1520078 ']' 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1520078 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1520078 ']' 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1520078 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:39.570 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1520078 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1520078' 00:33:39.831 killing process with pid 1520078 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1520078 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1520078 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.831 16:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:42.377 00:33:42.377 real 0m44.589s 00:33:42.377 user 0m53.198s 00:33:42.377 sys 0m11.075s 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:42.377 ************************************ 00:33:42.377 END TEST nvmf_lvs_grow 00:33:42.377 ************************************ 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:42.377 ************************************ 00:33:42.377 START TEST nvmf_bdev_io_wait 00:33:42.377 ************************************ 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:42.377 * Looking for test storage... 
00:33:42.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:42.377 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:42.378 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:42.378 16:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:42.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.378 --rc genhtml_branch_coverage=1 00:33:42.378 --rc genhtml_function_coverage=1 00:33:42.378 --rc genhtml_legend=1 00:33:42.378 --rc geninfo_all_blocks=1 00:33:42.378 --rc geninfo_unexecuted_blocks=1 00:33:42.378 00:33:42.378 ' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:42.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.378 --rc genhtml_branch_coverage=1 00:33:42.378 --rc genhtml_function_coverage=1 00:33:42.378 --rc genhtml_legend=1 00:33:42.378 --rc geninfo_all_blocks=1 00:33:42.378 --rc geninfo_unexecuted_blocks=1 00:33:42.378 00:33:42.378 ' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:42.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.378 --rc genhtml_branch_coverage=1 00:33:42.378 --rc genhtml_function_coverage=1 00:33:42.378 --rc genhtml_legend=1 00:33:42.378 --rc geninfo_all_blocks=1 00:33:42.378 --rc geninfo_unexecuted_blocks=1 00:33:42.378 00:33:42.378 ' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:42.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.378 --rc genhtml_branch_coverage=1 00:33:42.378 --rc genhtml_function_coverage=1 00:33:42.378 --rc genhtml_legend=1 00:33:42.378 --rc geninfo_all_blocks=1 00:33:42.378 --rc 
geninfo_unexecuted_blocks=1 00:33:42.378 00:33:42.378 ' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.378 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:42.379 16:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.524 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.524 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:50.524 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:50.524 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
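The arrays being declared here are about to be filled with NIC device IDs and walked to map each PCI function to its kernel net device through sysfs, as the loop below shows. The same lookup can be done by hand (sketch; lspci itself is not part of the traced script, and the PCI address comes from the discovery output below):

  lspci -d 8086:159b                           # list E810-family functions (vendor 0x8086, device 0x159b)
  ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0, the net device the loop below reports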
00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:50.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:50.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:50.525 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:50.525 
16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:50.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:50.525 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:50.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:33:50.526 00:33:50.526 --- 10.0.0.2 ping statistics --- 00:33:50.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.526 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:33:50.526 00:33:50.526 --- 10.0.0.1 ping statistics --- 00:33:50.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.526 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1525068 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1525068 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1525068 ']' 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
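Stripped of the tracing, the target/initiator split just verified is ordinary iproute2 plumbing; a sketch with the exact interface names and addresses from this run (root required):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                     # root ns -> target, as above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator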
00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:50.526 16:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.526 [2024-11-20 16:28:25.671207] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:50.526 [2024-11-20 16:28:25.672344] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:50.526 [2024-11-20 16:28:25.672398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.526 [2024-11-20 16:28:25.772543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:50.526 [2024-11-20 16:28:25.826656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.526 [2024-11-20 16:28:25.826707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.526 [2024-11-20 16:28:25.826716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.526 [2024-11-20 16:28:25.826724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.526 [2024-11-20 16:28:25.826730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.526 [2024-11-20 16:28:25.829104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.526 [2024-11-20 16:28:25.829268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.526 [2024-11-20 16:28:25.829315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.526 [2024-11-20 16:28:25.829315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:50.526 [2024-11-20 16:28:25.829932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 [2024-11-20 16:28:26.618584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:50.789 [2024-11-20 16:28:26.619572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:50.789 [2024-11-20 16:28:26.619665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:50.789 [2024-11-20 16:28:26.619828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
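The ordering here matters: because nvmf_tgt was started with --wait-for-rpc, bdev_set_options can still land before the framework initializes, and framework_start_init is what finally brings the subsystems (and poll-group threads) up. The same bring-up as standalone commands, a sketch assuming binaries from an SPDK build, with the netns prefix from the trace omitted for brevity:

  nvmf_tgt -m 0xF --interrupt-mode --wait-for-rpc &   # app starts, framework stays idle
  rpc.py bdev_set_options -p 5 -c 1                   # must land before init
  rpc.py framework_start_init                         # subsystems initialize now
  rpc.py nvmf_create_transport -t tcp -o -u 8192      # the transport created just below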
00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 [2024-11-20 16:28:26.630133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 Malloc0 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:50.789 [2024-11-20 16:28:26.706792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1525338 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:50.789 16:28:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1525341 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:50.789 { 00:33:50.789 "params": { 00:33:50.789 "name": "Nvme$subsystem", 00:33:50.789 "trtype": "$TEST_TRANSPORT", 00:33:50.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.789 "adrfam": "ipv4", 00:33:50.789 "trsvcid": "$NVMF_PORT", 00:33:50.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.789 "hdgst": ${hdgst:-false}, 00:33:50.789 "ddgst": ${ddgst:-false} 00:33:50.789 }, 00:33:50.789 "method": "bdev_nvme_attach_controller" 00:33:50.789 } 00:33:50.789 EOF 00:33:50.789 )") 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1525344 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1525348 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:50.789 { 00:33:50.789 "params": { 00:33:50.789 "name": "Nvme$subsystem", 00:33:50.789 "trtype": "$TEST_TRANSPORT", 00:33:50.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.789 "adrfam": "ipv4", 00:33:50.789 "trsvcid": "$NVMF_PORT", 00:33:50.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.789 "hdgst": ${hdgst:-false}, 00:33:50.789 "ddgst": ${ddgst:-false} 00:33:50.789 }, 00:33:50.789 "method": "bdev_nvme_attach_controller" 00:33:50.789 } 00:33:50.789 EOF 00:33:50.789 )") 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:50.789 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:50.790 { 00:33:50.790 "params": { 00:33:50.790 "name": "Nvme$subsystem", 00:33:50.790 "trtype": "$TEST_TRANSPORT", 00:33:50.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.790 "adrfam": "ipv4", 00:33:50.790 "trsvcid": "$NVMF_PORT", 00:33:50.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.790 "hdgst": ${hdgst:-false}, 00:33:50.790 "ddgst": ${ddgst:-false} 00:33:50.790 }, 00:33:50.790 "method": "bdev_nvme_attach_controller" 00:33:50.790 } 00:33:50.790 EOF 00:33:50.790 )") 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:50.790 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:50.790 { 00:33:50.790 "params": { 00:33:50.790 "name": "Nvme$subsystem", 00:33:50.790 "trtype": "$TEST_TRANSPORT", 00:33:50.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.790 "adrfam": "ipv4", 00:33:50.790 "trsvcid": "$NVMF_PORT", 00:33:50.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.790 "hdgst": ${hdgst:-false}, 00:33:50.790 "ddgst": ${ddgst:-false} 00:33:50.790 }, 00:33:50.790 "method": "bdev_nvme_attach_controller" 00:33:50.790 } 00:33:50.790 EOF 00:33:50.790 )") 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1525338 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.052 "params": { 00:33:51.052 "name": "Nvme1", 00:33:51.052 "trtype": "tcp", 00:33:51.052 "traddr": "10.0.0.2", 00:33:51.052 "adrfam": "ipv4", 00:33:51.052 "trsvcid": "4420", 00:33:51.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.052 "hdgst": false, 00:33:51.052 "ddgst": false 00:33:51.052 }, 00:33:51.052 "method": "bdev_nvme_attach_controller" 00:33:51.052 }' 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.052 "params": { 00:33:51.052 "name": "Nvme1", 00:33:51.052 "trtype": "tcp", 00:33:51.052 "traddr": "10.0.0.2", 00:33:51.052 "adrfam": "ipv4", 00:33:51.052 "trsvcid": "4420", 00:33:51.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.052 "hdgst": false, 00:33:51.052 "ddgst": false 00:33:51.052 }, 00:33:51.052 "method": "bdev_nvme_attach_controller" 00:33:51.052 }' 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.052 "params": { 00:33:51.052 "name": "Nvme1", 00:33:51.052 "trtype": "tcp", 00:33:51.052 "traddr": "10.0.0.2", 00:33:51.052 "adrfam": "ipv4", 00:33:51.052 "trsvcid": "4420", 00:33:51.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.052 "hdgst": false, 00:33:51.052 "ddgst": false 00:33:51.052 }, 00:33:51.052 "method": "bdev_nvme_attach_controller" 00:33:51.052 }' 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:51.052 16:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.052 "params": { 00:33:51.052 "name": "Nvme1", 00:33:51.052 "trtype": "tcp", 00:33:51.052 "traddr": "10.0.0.2", 00:33:51.052 "adrfam": "ipv4", 00:33:51.052 "trsvcid": "4420", 00:33:51.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.052 "hdgst": false, 00:33:51.052 "ddgst": false 00:33:51.052 }, 00:33:51.052 "method": "bdev_nvme_attach_controller" 00:33:51.052 }' 00:33:51.052 [2024-11-20 16:28:26.766078] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:51.052 [2024-11-20 16:28:26.766080] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
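The four gen_nvmf_target_json expansions traced above all resolve to the same Nvme1 attach stanza printed by the printf lines. A minimal standalone sketch of the pattern, reconstructed from the traced nvmf/common.sh fragments (the function body below is an assumption; only the heredoc template, the IFS=',' join, and the jq validation appear in this log):

gen_nvmf_target_json_sketch() {     # hypothetical name; mirrors gen_nvmf_target_json
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one JSON attach request per requested subsystem, defaults as in the trace
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                      # join multiple entries with commas
    printf '%s\n' "${config[*]}"     # the real helper additionally pipes through jq .
}

bdevperf consumes this on --json /dev/fd/63, which is what a bash process substitution <(gen_nvmf_target_json) looks like in the trace.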
00:33:51.052 [2024-11-20 16:28:26.766151] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:51.053 [2024-11-20 16:28:26.766152] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:51.053 [2024-11-20 16:28:26.769089] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:51.053 [2024-11-20 16:28:26.769089] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:33:51.053 [2024-11-20 16:28:26.769169] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:51.053 [2024-11-20 16:28:26.769173] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:51.315 [2024-11-20 16:28:26.996611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.315 [2024-11-20 16:28:27.036570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:51.315 [2024-11-20 16:28:27.087951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.315 [2024-11-20 16:28:27.129507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:51.315 [2024-11-20 16:28:27.154091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.315 [2024-11-20 16:28:27.192757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:51.315 [2024-11-20 16:28:27.225100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.576 [2024-11-20 16:28:27.264859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:51.576 Running I/O for 1 seconds... 00:33:51.576 Running I/O for 1 seconds... 00:33:51.576 Running I/O for 1 seconds... 00:33:51.576 Running I/O for 1 seconds...
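The startup notices above arrive interleaved because target/bdev_io_wait.sh runs four bdevperf instances at once, one per workload, each with its own core mask, shm id, and DPDK file prefix so their hugepage state cannot collide. A condensed sketch of the launch pattern as traced (the PID variable names and explicit backgrounding here are assumptions; the flags are verbatim from the trace):

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# -m core mask, -i shm id; -q queue depth, -o IO size in bytes, -w workload,
# -t run seconds, -s DPDK memory in MB (shows up as "-m 256" in the EAL lines)
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The wait calls visible further down (wait 1525338, wait 1525341, wait 1525344, wait 1525348) are this join step, one PID per workload.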
00:33:52.518 10872.00 IOPS, 42.47 MiB/s 00:33:52.518 Latency(us) 00:33:52.518 [2024-11-20T15:28:28.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.518 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:52.518 Nvme1n1 : 1.01 10927.55 42.69 0.00 0.00 11668.88 2266.45 14308.69 00:33:52.518 [2024-11-20T15:28:28.454Z] =================================================================================================================== 00:33:52.518 [2024-11-20T15:28:28.454Z] Total : 10927.55 42.69 0.00 0.00 11668.88 2266.45 14308.69 00:33:52.518 10946.00 IOPS, 42.76 MiB/s 00:33:52.518 Latency(us) 00:33:52.518 [2024-11-20T15:28:28.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.518 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:52.518 Nvme1n1 : 1.01 11020.62 43.05 0.00 0.00 11575.82 2812.59 15400.96 00:33:52.518 [2024-11-20T15:28:28.454Z] =================================================================================================================== 00:33:52.518 [2024-11-20T15:28:28.454Z] Total : 11020.62 43.05 0.00 0.00 11575.82 2812.59 15400.96 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1525341 00:33:52.780 9967.00 IOPS, 38.93 MiB/s 00:33:52.780 Latency(us) 00:33:52.780 [2024-11-20T15:28:28.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.780 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:52.780 Nvme1n1 : 1.01 10032.13 39.19 0.00 0.00 12714.54 4833.28 19660.80 00:33:52.780 [2024-11-20T15:28:28.716Z] =================================================================================================================== 00:33:52.780 [2024-11-20T15:28:28.716Z] Total : 10032.13 39.19 0.00 0.00 12714.54 4833.28 19660.80 00:33:52.780 177608.00 IOPS, 693.78 MiB/s 00:33:52.780 Latency(us) 00:33:52.780 [2024-11-20T15:28:28.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.780 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:52.780 Nvme1n1 : 1.00 177254.40 692.40 0.00 0.00 717.99 320.85 1966.08 00:33:52.780 [2024-11-20T15:28:28.716Z] =================================================================================================================== 00:33:52.780 [2024-11-20T15:28:28.716Z] Total : 177254.40 692.40 0.00 0.00 717.99 320.85 1966.08 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1525344 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1525348 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:52.780 rmmod nvme_tcp 00:33:52.780 rmmod nvme_fabrics 00:33:52.780 rmmod nvme_keyring 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1525068 ']' 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1525068 00:33:52.780 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1525068 ']' 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1525068 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525068 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525068' 00:33:53.042 killing process with pid 1525068 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1525068 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1525068 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
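The MiB/s column in the result tables above is just IOPS scaled by the 4096-byte IO size; for example, the write job's 10927.55 IOPS matches its reported 42.69 MiB/s. A one-line check:

# IOPS * io_size_bytes / bytes_per_MiB
awk 'BEGIN { printf "%.2f MiB/s\n", 10927.55 * 4096 / (1024 * 1024) }'   # prints 42.69 MiB/s

The flush job's outsized ~177k IOPS is consistent with flush being a near no-op for the memory-backed bdev this suite configures (an assumption here; the bdev setup is earlier in the log), so that row measures round-trip overhead rather than data movement.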
00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.042 16:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.654 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.654 00:33:55.654 real 0m13.199s 00:33:55.654 user 0m16.071s 00:33:55.654 sys 0m7.776s 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:55.655 ************************************ 00:33:55.655 END TEST nvmf_bdev_io_wait 00:33:55.655 ************************************ 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:55.655 ************************************ 00:33:55.655 START TEST nvmf_queue_depth 00:33:55.655 ************************************ 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:55.655 * Looking for test storage... 
00:33:55.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:55.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.655 --rc genhtml_branch_coverage=1 00:33:55.655 --rc genhtml_function_coverage=1 00:33:55.655 --rc genhtml_legend=1 00:33:55.655 --rc geninfo_all_blocks=1 00:33:55.655 --rc geninfo_unexecuted_blocks=1 00:33:55.655 00:33:55.655 ' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:55.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.655 --rc genhtml_branch_coverage=1 00:33:55.655 --rc genhtml_function_coverage=1 00:33:55.655 --rc genhtml_legend=1 00:33:55.655 --rc geninfo_all_blocks=1 00:33:55.655 --rc geninfo_unexecuted_blocks=1 00:33:55.655 00:33:55.655 ' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:55.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.655 --rc genhtml_branch_coverage=1 00:33:55.655 --rc genhtml_function_coverage=1 00:33:55.655 --rc genhtml_legend=1 00:33:55.655 --rc geninfo_all_blocks=1 00:33:55.655 --rc geninfo_unexecuted_blocks=1 00:33:55.655 00:33:55.655 ' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:55.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.655 --rc genhtml_branch_coverage=1 00:33:55.655 --rc genhtml_function_coverage=1 00:33:55.655 --rc genhtml_legend=1 00:33:55.655 --rc geninfo_all_blocks=1 00:33:55.655 --rc 
geninfo_unexecuted_blocks=1 00:33:55.655 00:33:55.655 ' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:55.655 16:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:03.799 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.799 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:03.799 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:03.799 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:03.799 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:03.799 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:03.800 16:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:03.800 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:03.800 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:34:03.800 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:03.800 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:03.800 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:03.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:03.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:34:03.801 00:34:03.801 --- 10.0.0.2 ping statistics --- 00:34:03.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.801 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:03.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:03.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:34:03.801 00:34:03.801 --- 10.0.0.1 ping statistics --- 00:34:03.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.801 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1529799 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1529799 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1529799 ']' 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
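The nvmf_tcp_init sequence traced above builds a two-end topology on one host: the target-side port cvl_0_0 moves into a private network namespace as 10.0.0.2 while the initiator keeps cvl_0_1 as 10.0.0.1. Condensed from the traced commands (the grouping is editorial; the commands are verbatim):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP on the initiator interface; the comment tag lets iptr strip the rule at teardown
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings above verified each direction of that link before any NVMe traffic was attempted.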
00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.801 16:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:03.801 [2024-11-20 16:28:38.883652] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:03.801 [2024-11-20 16:28:38.884763] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:34:03.801 [2024-11-20 16:28:38.884812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:03.801 [2024-11-20 16:28:38.987709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.801 [2024-11-20 16:28:39.038352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:03.801 [2024-11-20 16:28:39.038398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:03.801 [2024-11-20 16:28:39.038406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:03.801 [2024-11-20 16:28:39.038414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:03.801 [2024-11-20 16:28:39.038420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:03.801 [2024-11-20 16:28:39.039148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.801 [2024-11-20 16:28:39.116443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:03.801 [2024-11-20 16:28:39.116728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
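With the namespace up, the trace above starts the target inside it, single-core and in the mode this whole suite exercises: --interrupt-mode, which the thread notices confirm for both app_thread and the nvmf poll group. The launch line, reflowed from the trace with comments added:

# Under --interrupt-mode the reactor blocks waiting for events instead of busy-polling.
# -i 0: shm id, -e 0xFFFF: tracepoint group mask (echoed by app_setup_trace above),
# -m 0x2: one reactor pinned to core 1, matching the "Reactor started on core 1" notice.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2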
00:34:03.801 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:03.801 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:03.801 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:03.801 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:03.801 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.062 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.062 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:04.062 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.062 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.062 [2024-11-20 16:28:39.752001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.063 Malloc0 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.063 [2024-11-20 16:28:39.828067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1530141 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1530141 /var/tmp/bdevperf.sock 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1530141 ']' 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:04.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.063 16:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.063 [2024-11-20 16:28:39.891620] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
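Unlike the bdev_io_wait run, this bdevperf is started with -z (stay idle until triggered) on a private RPC socket, so the script can finish wiring the target before any I/O starts. The target-side steps traced above, plus the attach and trigger that follow below, condensed (rpc_cmd in the log is a wrapper; driving the same steps through scripts/rpc.py as sketched here is an assumption, while the flags are verbatim):

# target side: transport, 64 MiB / 512 B-block malloc bdev, subsystem, namespace, listener
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf idles under -z until perform_tests arrives on its socket
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 depth is the point: with 1024 outstanding IOs against one subsystem, requests must queue and wait, which is the queue-depth behavior this test targets.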
00:34:04.063 [2024-11-20 16:28:39.891687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530141 ] 00:34:04.063 [2024-11-20 16:28:39.984251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.324 [2024-11-20 16:28:40.039340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.895 16:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.895 16:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:04.896 16:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:04.896 16:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.896 16:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:04.896 NVMe0n1 00:34:04.896 16:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.896 16:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:05.157 Running I/O for 10 seconds... 00:34:07.043 8201.00 IOPS, 32.04 MiB/s [2024-11-20T15:28:43.921Z] 8704.50 IOPS, 34.00 MiB/s [2024-11-20T15:28:45.302Z] 9536.00 IOPS, 37.25 MiB/s [2024-11-20T15:28:46.241Z] 10483.75 IOPS, 40.95 MiB/s [2024-11-20T15:28:47.182Z] 11058.40 IOPS, 43.20 MiB/s [2024-11-20T15:28:48.124Z] 11445.83 IOPS, 44.71 MiB/s [2024-11-20T15:28:49.065Z] 11761.86 IOPS, 45.94 MiB/s [2024-11-20T15:28:50.005Z] 12019.50 IOPS, 46.95 MiB/s [2024-11-20T15:28:50.947Z] 12185.33 IOPS, 47.60 MiB/s [2024-11-20T15:28:51.208Z] 12323.80 IOPS, 48.14 MiB/s 00:34:15.272 Latency(us) 00:34:15.272 [2024-11-20T15:28:51.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.272 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:15.272 Verification LBA range: start 0x0 length 0x4000 00:34:15.272 NVMe0n1 : 10.05 12361.25 48.29 0.00 0.00 82543.48 12451.84 79080.11 00:34:15.272 [2024-11-20T15:28:51.208Z] =================================================================================================================== 00:34:15.272 [2024-11-20T15:28:51.208Z] Total : 12361.25 48.29 0.00 0.00 82543.48 12451.84 79080.11 00:34:15.272 { 00:34:15.272 "results": [ 00:34:15.272 { 00:34:15.272 "job": "NVMe0n1", 00:34:15.272 "core_mask": "0x1", 00:34:15.272 "workload": "verify", 00:34:15.272 "status": "finished", 00:34:15.272 "verify_range": { 00:34:15.272 "start": 0, 00:34:15.272 "length": 16384 00:34:15.272 }, 00:34:15.272 "queue_depth": 1024, 00:34:15.272 "io_size": 4096, 00:34:15.272 "runtime": 10.047687, 00:34:15.272 "iops": 12361.252893327588, 00:34:15.272 "mibps": 48.28614411456089, 00:34:15.272 "io_failed": 0, 00:34:15.272 "io_timeout": 0, 00:34:15.272 "avg_latency_us": 82543.48228852998, 00:34:15.272 "min_latency_us": 12451.84, 00:34:15.272 "max_latency_us": 79080.10666666667 00:34:15.272 } 00:34:15.272 ], 
00:34:15.272 "core_count": 1 00:34:15.272 } 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1530141 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1530141 ']' 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1530141 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1530141 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1530141' 00:34:15.272 killing process with pid 1530141 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1530141 00:34:15.272 Received shutdown signal, test time was about 10.000000 seconds 00:34:15.272 00:34:15.272 Latency(us) 00:34:15.272 [2024-11-20T15:28:51.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.272 [2024-11-20T15:28:51.208Z] =================================================================================================================== 00:34:15.272 [2024-11-20T15:28:51.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1530141 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.272 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.272 rmmod nvme_tcp 00:34:15.272 rmmod nvme_fabrics 00:34:15.533 rmmod nvme_keyring 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:15.533 16:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1529799 ']' 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1529799 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1529799 ']' 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1529799 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1529799 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1529799' 00:34:15.533 killing process with pid 1529799 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1529799 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1529799 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.533 16:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:18.081 00:34:18.081 real 0m22.397s 00:34:18.081 user 0m24.546s 00:34:18.081 sys 0m7.467s 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:18.081 ************************************ 00:34:18.081 END TEST nvmf_queue_depth 00:34:18.081 ************************************ 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:18.081 ************************************ 00:34:18.081 START TEST nvmf_target_multipath 00:34:18.081 ************************************ 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:18.081 * Looking for test storage... 00:34:18.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:18.081 16:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:18.081 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.082 --rc genhtml_branch_coverage=1 00:34:18.082 --rc genhtml_function_coverage=1 00:34:18.082 --rc genhtml_legend=1 00:34:18.082 --rc geninfo_all_blocks=1 00:34:18.082 --rc geninfo_unexecuted_blocks=1 00:34:18.082 00:34:18.082 ' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.082 --rc genhtml_branch_coverage=1 00:34:18.082 --rc genhtml_function_coverage=1 00:34:18.082 --rc genhtml_legend=1 00:34:18.082 --rc geninfo_all_blocks=1 00:34:18.082 --rc geninfo_unexecuted_blocks=1 00:34:18.082 00:34:18.082 ' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.082 --rc genhtml_branch_coverage=1 00:34:18.082 --rc genhtml_function_coverage=1 00:34:18.082 --rc genhtml_legend=1 00:34:18.082 --rc geninfo_all_blocks=1 00:34:18.082 --rc 
geninfo_unexecuted_blocks=1 00:34:18.082 00:34:18.082 ' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.082 --rc genhtml_branch_coverage=1 00:34:18.082 --rc genhtml_function_coverage=1 00:34:18.082 --rc genhtml_legend=1 00:34:18.082 --rc geninfo_all_blocks=1 00:34:18.082 --rc geninfo_unexecuted_blocks=1 00:34:18.082 00:34:18.082 ' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.082 16:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:18.082 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.083 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.083 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.083 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:18.083 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:18.083 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:18.083 16:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
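[editor's note] gather_supported_nvmf_pci_devs, traced next, builds arrays of supported vendor/device IDs (here the Intel E810 IDs 0x1592/0x159b match) and walks /sys/bus/pci to record the kernel netdev under each hit. Roughly, under those assumptions (a sketch, not the common.sh implementation):

    intel=0x8086
    e810_ids=(0x1592 0x159b)            # E810 device IDs from the trace below
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] || continue
            echo "Found ${dev##*/} ($vendor - $device)"
            for net in "$dev"/net/*; do   # kernel netdev name(s), e.g. cvl_0_0
                [[ -e $net ]] && echo "  net device: ${net##*/}"
            done
        done
    done

The two "Found 0000:4b:00.x" / "Found net devices under ..." messages in the trace below are this scan reporting both ports of the E810 NIC.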
00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.230 16:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:26.230 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:26.230 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.230 16:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.230 16:29:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:26.230 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.230 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:26.230 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:26.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:34:26.231 00:34:26.231 --- 10.0.0.2 ping statistics --- 00:34:26.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.231 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:26.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:34:26.231 00:34:26.231 --- 10.0.0.1 ping statistics --- 00:34:26.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.231 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:26.231 only one NIC for nvmf test 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:26.231 rmmod nvme_tcp 00:34:26.231 rmmod nvme_fabrics 00:34:26.231 rmmod nvme_keyring 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:26.231 16:29:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:26.231 16:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:27.710 16:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.710 00:34:27.710 real 0m9.978s 00:34:27.710 user 0m2.149s 00:34:27.710 sys 0m5.772s 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:27.710 ************************************ 00:34:27.710 END TEST nvmf_target_multipath 00:34:27.710 ************************************ 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.710 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:27.973 ************************************ 00:34:27.973 START TEST nvmf_zcopy 00:34:27.973 ************************************ 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:27.973 * Looking for test storage... 
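[editor's note] Each sub-test in this log is driven through the same run_test wrapper: it prints the START banner, times the script (producing the real/user/sys triplet seen above), and closes with the END banner. In outline — a simplified sketch; the actual autotest_common.sh helper also manages xtrace and exit-code handling:

    run_test() {   # simplified sketch of the harness wrapper
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                      # emits the real/user/sys summary
        echo "END TEST $name"
    }
    # e.g.: run_test nvmf_zcopy test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode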
00:34:27.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.973 --rc genhtml_branch_coverage=1 00:34:27.973 --rc genhtml_function_coverage=1 00:34:27.973 --rc genhtml_legend=1 00:34:27.973 --rc geninfo_all_blocks=1 00:34:27.973 --rc geninfo_unexecuted_blocks=1 00:34:27.973 00:34:27.973 ' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.973 --rc genhtml_branch_coverage=1 00:34:27.973 --rc genhtml_function_coverage=1 00:34:27.973 --rc genhtml_legend=1 00:34:27.973 --rc geninfo_all_blocks=1 00:34:27.973 --rc geninfo_unexecuted_blocks=1 00:34:27.973 00:34:27.973 ' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.973 --rc genhtml_branch_coverage=1 00:34:27.973 --rc genhtml_function_coverage=1 00:34:27.973 --rc genhtml_legend=1 00:34:27.973 --rc geninfo_all_blocks=1 00:34:27.973 --rc geninfo_unexecuted_blocks=1 00:34:27.973 00:34:27.973 ' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.973 --rc genhtml_branch_coverage=1 00:34:27.973 --rc genhtml_function_coverage=1 00:34:27.973 --rc genhtml_legend=1 00:34:27.973 --rc geninfo_all_blocks=1 00:34:27.973 --rc geninfo_unexecuted_blocks=1 00:34:27.973 00:34:27.973 ' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.973 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.974 16:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.974 16:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:36.114 16:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:36.114 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:36.114 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:36.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:36.114 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:36.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:36.115 16:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:36.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:36.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms
00:34:36.115
00:34:36.115 --- 10.0.0.2 ping statistics ---
00:34:36.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:36.115 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:36.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:36.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:34:36.115
00:34:36.115 --- 10.0.0.1 ping statistics ---
00:34:36.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:36.115 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1540487
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1540487
00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1540487 ']' 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.115 16:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.115 [2024-11-20 16:29:11.481075] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:36.115 [2024-11-20 16:29:11.482232] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:34:36.115 [2024-11-20 16:29:11.482284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.115 [2024-11-20 16:29:11.583833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.115 [2024-11-20 16:29:11.633823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.115 [2024-11-20 16:29:11.633873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.115 [2024-11-20 16:29:11.633882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.115 [2024-11-20 16:29:11.633889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.115 [2024-11-20 16:29:11.633896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.115 [2024-11-20 16:29:11.634645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.115 [2024-11-20 16:29:11.710462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:36.115 [2024-11-20 16:29:11.710761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
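At this point the trace has finished the physical-NIC plumbing: it resolved each supported PCI function to its kernel net device through sysfs, moved one end into a private network namespace, addressed and raised both ends, opened TCP port 4420 in iptables, verified reachability in both directions, and launched nvmf_tgt inside the namespace. Condensed into one sequence from the commands in the trace (the PCI address, the cvl_0_0/cvl_0_1 names, and the workspace path are this run's values, not defaults):

#!/usr/bin/env bash
# Condensed from the trace above; run as root.
pci=0000:4b:00.0                                  # first E810 port found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs maps PCI function -> netdev
dev=${pci_net_devs[0]##*/}                        # cvl_0_0 on this host

ip netns add cvl_0_0_ns_spdk                      # target side gets its own netns
ip link set "$dev" netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev "$dev"
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set "$dev" up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic, then prove both directions route.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Same target invocation as the trace: core mask 0x2, interrupt mode.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &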
00:34:36.376 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.376 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:36.376 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:36.376 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:36.376 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.637 [2024-11-20 16:29:12.347528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.637 [2024-11-20 16:29:12.375853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:36.637 16:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.637 malloc0 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:36.637 { 00:34:36.637 "params": { 00:34:36.637 "name": "Nvme$subsystem", 00:34:36.637 "trtype": "$TEST_TRANSPORT", 00:34:36.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.637 "adrfam": "ipv4", 00:34:36.637 "trsvcid": "$NVMF_PORT", 00:34:36.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.637 "hdgst": ${hdgst:-false}, 00:34:36.637 "ddgst": ${ddgst:-false} 00:34:36.637 }, 00:34:36.637 "method": "bdev_nvme_attach_controller" 00:34:36.637 } 00:34:36.637 EOF 00:34:36.637 )") 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:36.637 16:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:36.637 "params": { 00:34:36.637 "name": "Nvme1", 00:34:36.637 "trtype": "tcp", 00:34:36.637 "traddr": "10.0.0.2", 00:34:36.637 "adrfam": "ipv4", 00:34:36.637 "trsvcid": "4420", 00:34:36.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:36.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:36.637 "hdgst": false, 00:34:36.637 "ddgst": false 00:34:36.637 }, 00:34:36.637 "method": "bdev_nvme_attach_controller" 00:34:36.637 }' 00:34:36.637 [2024-11-20 16:29:12.479358] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
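The rpc_cmd calls above are the harness wrapper around scripts/rpc.py, talking to the target's default /var/tmp/spdk.sock. As standalone commands, the provisioning zcopy.sh just performed looks roughly like this (a sketch of equivalent invocations, not the harness's literal code; the -t tcp -o pair is the NVMF_TRANSPORT_OPTS value set earlier in the trace):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# zcopy.sh@22: TCP transport with no in-capsule data (-c 0), zero-copy enabled.
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy

# zcopy.sh@24: subsystem allowing any host (-a), capped at 10 namespaces (-m 10).
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# zcopy.sh@25/@27: data listener plus discovery on the target-side address.
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# zcopy.sh@29/@30: a 32 MiB malloc bdev with 4096-byte blocks, exported as NSID 1.
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1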
00:34:36.637 [2024-11-20 16:29:12.479426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540752 ]
00:34:36.899 [2024-11-20 16:29:12.570733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:36.899 [2024-11-20 16:29:12.623675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:36.899 Running I/O for 10 seconds...
00:34:38.861 6375.00 IOPS, 49.80 MiB/s [2024-11-20T15:29:16.185Z] 6409.50 IOPS, 50.07 MiB/s [2024-11-20T15:29:17.131Z] 6437.67 IOPS, 50.29 MiB/s [2024-11-20T15:29:18.077Z] 6830.50 IOPS, 53.36 MiB/s [2024-11-20T15:29:19.018Z] 7398.80 IOPS, 57.80 MiB/s [2024-11-20T15:29:19.960Z] 7772.67 IOPS, 60.72 MiB/s [2024-11-20T15:29:20.900Z] 8039.29 IOPS, 62.81 MiB/s [2024-11-20T15:29:21.841Z] 8239.38 IOPS, 64.37 MiB/s [2024-11-20T15:29:23.224Z] 8394.89 IOPS, 65.59 MiB/s [2024-11-20T15:29:23.224Z] 8524.20 IOPS, 66.60 MiB/s
00:34:47.288 Latency(us)
00:34:47.288 [2024-11-20T15:29:23.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:47.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:34:47.288 Verification LBA range: start 0x0 length 0x1000
00:34:47.288 Nvme1n1 : 10.01 8525.61 66.61 0.00 0.00 14967.35 703.15 27852.80
00:34:47.288 [2024-11-20T15:29:23.224Z] ===================================================================================================================
00:34:47.288 [2024-11-20T15:29:23.224Z] Total : 8525.61 66.61 0.00 0.00 14967.35 703.15 27852.80
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1542623
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:47.288 {
00:34:47.288 "params": {
00:34:47.288 "name": "Nvme$subsystem",
00:34:47.288 "trtype": "$TEST_TRANSPORT",
00:34:47.288 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:47.288 "adrfam": "ipv4",
00:34:47.288 "trsvcid": "$NVMF_PORT",
00:34:47.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:47.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:47.288 "hdgst": ${hdgst:-false},
00:34:47.288 "ddgst": ${ddgst:-false}
00:34:47.288 },
00:34:47.288 "method": "bdev_nvme_attach_controller"
00:34:47.288 }
00:34:47.288 EOF
00:34:47.288 )")
00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:47.288
[2024-11-20 16:29:22.927055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:22.927081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:47.288 16:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:47.288 "params": { 00:34:47.288 "name": "Nvme1", 00:34:47.288 "trtype": "tcp", 00:34:47.288 "traddr": "10.0.0.2", 00:34:47.288 "adrfam": "ipv4", 00:34:47.288 "trsvcid": "4420", 00:34:47.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:47.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:47.288 "hdgst": false, 00:34:47.288 "ddgst": false 00:34:47.288 }, 00:34:47.288 "method": "bdev_nvme_attach_controller" 00:34:47.288 }' 00:34:47.288 [2024-11-20 16:29:22.939022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:22.939031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:22.951020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:22.951028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:22.963019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:22.963027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:22.968760] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
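Both bdevperf invocations above receive their controller definition the same way: gen_nvmf_target_json prints a bdev-subsystem config whose bdev_nvme_attach_controller params are the ones printf'd in the trace, and the harness hands it over as an anonymous file descriptor (/dev/fd/62 for the first run, /dev/fd/63 for the second). A reduced sketch of that pattern using process substitution; the outer subsystems/bdev wrapper is the standard SPDK JSON-config shape and is assumed here rather than copied from the trace:

#!/usr/bin/env bash
# Emit the controller config on stdout; bash's <( ) turns it into /dev/fd/NN.
gen_json() {
cat << 'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
JSON
}

# Second run from the trace: 5 s of 50/50 random read/write at queue depth 128, 8 KiB IOs.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_json) -t 5 -q 128 -w randrw -M 50 -o 8192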
00:34:47.288 [2024-11-20 16:29:22.968809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542623 ] 00:34:47.288 [2024-11-20 16:29:22.975020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:22.975029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:22.987019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:22.987028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:22.999019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:22.999027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.011019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.011027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.023020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.023028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.035019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.035032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.047019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.047027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.050438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.288 [2024-11-20 16:29:23.059021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.059031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.071020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.071030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.079904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.288 [2024-11-20 16:29:23.083020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.083029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.095027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.095038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.107025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.288 [2024-11-20 16:29:23.107036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.288 [2024-11-20 16:29:23.119022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:47.289 [2024-11-20 16:29:23.119034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.131021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.131031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.143114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.143128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.155024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.155036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.167021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.167032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.179023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.179035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.191019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.191028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.203019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.203027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.289 [2024-11-20 16:29:23.215019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.289 [2024-11-20 16:29:23.215028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.227020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.227032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.239019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.239027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.251019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.251032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.263019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.263027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.275020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.275030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.287019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.287028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 
16:29:23.299020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.299028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.311018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.311027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.323019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.323029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.335019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.335027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.347019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.347026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.359019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.359028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.371025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.371040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 Running I/O for 5 seconds... 00:34:47.549 [2024-11-20 16:29:23.387040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.387057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.400019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.400035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.413886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.413903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.427122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.427138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.439781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.439796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.454547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.454563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.467724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.549 [2024-11-20 16:29:23.467739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.549 [2024-11-20 16:29:23.482246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
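The repeating pair above, subsystem.c rejecting the request and nvmf_rpc.c reporting the failed RPC, is the harness re-issuing nvmf_subsystem_add_ns while the second bdevperf run is connected: every attempt asks for NSID 1, which malloc0 already occupies, so the target rejects each one (at roughly 12 ms intervals here), exercising the add-namespace error path while I/O is in flight. To hit or avoid the same error by hand (the NSID argument is optional in rpc.py; omitting -n lets the target assign a free one):

# Reproducing the rejection in the trace: NSID 1 is already bound to malloc0.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
#   -> subsystem.c: Requested NSID 1 already in use
#   -> nvmf_rpc.c:  Unable to add namespace

# Outside a test that wants the failure, add a new bdev and omit -n so the
# target picks the lowest free NSID itself.
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1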
00:34:47.549 [2024-11-20 16:29:23.482262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.809 [2024-11-20 16:29:23.495380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.809 [2024-11-20 16:29:23.495399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.809 [2024-11-20 16:29:23.510067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.809 [2024-11-20 16:29:23.510083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.809 [2024-11-20 16:29:23.523183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.809 [2024-11-20 16:29:23.523198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.809 [2024-11-20 16:29:23.535730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.535746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.549741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.549756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.562831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.562846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.575858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.575873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.589927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.589943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.603028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.603043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.615964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.615979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.629729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.629744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.642686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.642702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.655612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.655626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.669887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.810 [2024-11-20 16:29:23.669902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.810 [2024-11-20 16:29:23.682682] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:47.810 [2024-11-20 16:29:23.682697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this two-line error pair (subsystem.c:2123 "Requested NSID 1 already in use", followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats every ~13-15 ms from 16:29:23.695 through 16:29:27.595; the duplicate entries are elided here and only the interleaved throughput checkpoints are retained ...]
00:34:48.591 19009.00 IOPS, 148.51 MiB/s [2024-11-20T15:29:24.527Z]
00:34:49.633 19049.00 IOPS, 148.82 MiB/s [2024-11-20T15:29:25.569Z]
00:34:50.676 19075.67 IOPS, 149.03 MiB/s [2024-11-20T15:29:26.612Z]
00:34:51.721 19077.25 IOPS, 149.04 MiB/s [2024-11-20T15:29:27.657Z]
00:34:51.721 [2024-11-20 16:29:27.607682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:51.721 [2024-11-20 16:29:27.607696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:51.721 [2024-11-20 16:29:27.622266] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.721 [2024-11-20 16:29:27.622280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.721 [2024-11-20 16:29:27.635507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.721 [2024-11-20 16:29:27.635521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.721 [2024-11-20 16:29:27.649405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.721 [2024-11-20 16:29:27.649420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.662673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.662688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.675884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.675899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.690255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.690270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.703071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.703086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.715720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.715734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.729996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.730012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.742661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.742676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.755991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.756005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.770494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.770509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.783508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.783523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.798060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.798075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.811297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.811311] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.826140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.826155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.839413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.839434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.853952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.853967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.867043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.867058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.879556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.879570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.893851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.893866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.982 [2024-11-20 16:29:27.906979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.982 [2024-11-20 16:29:27.906994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:27.920213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:27.920229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:27.934541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:27.934556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:27.947613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:27.947627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:27.962227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:27.962241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:27.974891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:27.974906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:27.987787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:27.987802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.002039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.002055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.015335] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.015350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.030409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.030425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.043378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.043393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.057852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.057867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.070550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.070565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.083714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.083729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.098070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.098089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.111256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.111272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.124052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.124068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.138826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.138842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.151938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.151953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.244 [2024-11-20 16:29:28.166489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.244 [2024-11-20 16:29:28.166505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.179389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.179404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.194613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.194629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.207598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.207614] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.222145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.222166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.235084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.235100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.248405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.248421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.262587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.262602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.275617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.275632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.290052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.290069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.303234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.303250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.316010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.316025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.330708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.330723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.343742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.343757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.358447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.358467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.371577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.371592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 [2024-11-20 16:29:28.386573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.386588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 19082.60 IOPS, 149.08 MiB/s [2024-11-20T15:29:28.441Z] [2024-11-20 16:29:28.398261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.505 [2024-11-20 16:29:28.398276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.505 00:34:52.505 
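This error pair is the test working as intended, not a target fault: zcopy.sh evidently keeps issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is already mapped and the I/O job is live, so every attempt must be rejected on the nvmf_rpc error path. A minimal sketch of one such probe, assuming a target that already exposes NSID 1 on cnode1 (the second bdev name is purely illustrative):

  # Expected-failure probe: NSID 1 is occupied, so the RPC has to error out
  # (bdev name malloc1 is hypothetical here).
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
  # target side logs the pair seen above:
  #   subsystem.c: Requested NSID 1 already in use
  #   nvmf_rpc.c:  Unable to add namespace

Throughput is unaffected while the loop runs, as the two bandwidth samples above show (~19k IOPS on either side of a full second of rejected RPCs).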
00:34:52.505
00:34:52.505 Latency(us)
00:34:52.505 [2024-11-20T15:29:28.441Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:52.505 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:52.505 Nvme1n1            :       5.01   19085.34     149.10      0.00     0.00    6700.21    2757.97   11632.64
00:34:52.505 [2024-11-20T15:29:28.441Z] ===================================================================================================================
00:34:52.505 [2024-11-20T15:29:28.441Z] Total              :           19085.34     149.10      0.00     0.00    6700.21    2757.97   11632.64
00:34:52.505 [2024-11-20 16:29:28.407022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:52.505 [2024-11-20 16:29:28.407036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... eight more identical pairs at 12 ms intervals, the last at 16:29:28.503, while the remaining queued add-ns RPCs drain ...]
00:34:52.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1542623) - No such process
00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1542623
00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.766 16:29:28
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:52.766 delay0 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.766 16:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:52.766 [2024-11-20 16:29:28.626694] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:59.350 Initializing NVMe Controllers 00:34:59.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:59.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:59.350 Initialization complete. Launching workers. 
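Two sanity checks on the numbers above. The summary table is internally consistent: at queue depth 128, Little's law gives 128 / 19085.34 IOPS ≈ 6.71 ms of average latency, matching the reported 6700.21 µs, and 19085.34 IOPS × 8192 B ≈ 149.1 MiB/s. The teardown just traced then rebuilds NSID 1 on top of a delay bdev so that I/O lingers in flight long enough for the abort example to have something to cancel. Condensed to plain commands (paths abbreviated, rpc.py standing in for the suite's rpc_cmd wrapper; the delay bdev's -r/-t/-w/-n latencies are microsecond-valued, so ~1 s apiece here):

  # Re-point NSID 1 at a deliberately slow bdev, then fire the abort tool at it.
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The result lines that follow tally cleanly: 4993 successful plus 162 unsuccessful aborts account for all 5155 submitted, with a further 33 that could not be submitted at all.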
00:34:59.350 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4868 00:34:59.350 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5155, failed to submit 33 00:34:59.350 success 4993, unsuccessful 162, failed 0 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:59.350 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:59.350 rmmod nvme_tcp 00:34:59.612 rmmod nvme_fabrics 00:34:59.612 rmmod nvme_keyring 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1540487 ']' 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1540487 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1540487 ']' 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1540487 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540487 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540487' 00:34:59.612 killing process with pid 1540487 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1540487 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1540487 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:59.612 16:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:59.612 16:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:02.161 00:35:02.161 real 0m33.949s 00:35:02.161 user 0m43.169s 00:35:02.161 sys 0m12.302s 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:02.161 ************************************ 00:35:02.161 END TEST nvmf_zcopy 00:35:02.161 ************************************ 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:02.161 ************************************ 00:35:02.161 START TEST nvmf_nmic 00:35:02.161 ************************************ 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:02.161 * Looking for test storage... 
00:35:02.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.161 --rc genhtml_branch_coverage=1 00:35:02.161 --rc genhtml_function_coverage=1 00:35:02.161 --rc genhtml_legend=1 00:35:02.161 --rc geninfo_all_blocks=1 00:35:02.161 --rc geninfo_unexecuted_blocks=1 00:35:02.161 00:35:02.161 ' 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.161 --rc genhtml_branch_coverage=1 00:35:02.161 --rc genhtml_function_coverage=1 00:35:02.161 --rc genhtml_legend=1 00:35:02.161 --rc geninfo_all_blocks=1 00:35:02.161 --rc geninfo_unexecuted_blocks=1 00:35:02.161 00:35:02.161 ' 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.161 --rc genhtml_branch_coverage=1 00:35:02.161 --rc genhtml_function_coverage=1 00:35:02.161 --rc genhtml_legend=1 00:35:02.161 --rc geninfo_all_blocks=1 00:35:02.161 --rc geninfo_unexecuted_blocks=1 00:35:02.161 00:35:02.161 ' 00:35:02.161 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.161 --rc genhtml_branch_coverage=1 00:35:02.161 --rc genhtml_function_coverage=1 00:35:02.161 --rc genhtml_legend=1 00:35:02.162 --rc geninfo_all_blocks=1 00:35:02.162 --rc geninfo_unexecuted_blocks=1 00:35:02.162 00:35:02.162 ' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.162 16:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:02.162 16:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.322 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:10.322 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:10.322 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:10.322 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:10.322 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:10.323 16:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:10.323 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:10.323 16:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:10.323 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:10.323 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.323 
16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:10.323 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
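The interface plumbing in this stretch of the trace (completed just below by the link-up, loopback-up, firewall, and ping steps) is the standard phy-run topology: one E810 port is moved into a private network namespace to act as the target, its peer port stays in the root namespace as the initiator. Boiled down to bare commands, with the -m comment tagging that the ipts wrapper adds dropped:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420
  ping -c 1 10.0.0.2                                   # cross-namespace reachability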
00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:10.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:10.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:35:10.323 00:35:10.323 --- 10.0.0.2 ping statistics --- 00:35:10.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.323 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:35:10.323 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:10.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:10.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:35:10.323 00:35:10.323 --- 10.0.0.1 ping statistics --- 00:35:10.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.324 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1549169 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1549169 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1549169 ']' 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:10.324 16:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.324 [2024-11-20 16:29:45.492412] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:10.324 [2024-11-20 16:29:45.493536] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:35:10.324 [2024-11-20 16:29:45.493592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.324 [2024-11-20 16:29:45.593015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:10.324 [2024-11-20 16:29:45.647238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.324 [2024-11-20 16:29:45.647292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:10.324 [2024-11-20 16:29:45.647301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.324 [2024-11-20 16:29:45.647308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.324 [2024-11-20 16:29:45.647318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:10.324 [2024-11-20 16:29:45.649285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.324 [2024-11-20 16:29:45.649555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:10.324 [2024-11-20 16:29:45.649716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:10.324 [2024-11-20 16:29:45.649718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.324 [2024-11-20 16:29:45.726970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:10.324 [2024-11-20 16:29:45.728087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:10.324 [2024-11-20 16:29:45.728233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
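With the namespaces in place, nvmfappstart launches nvmf_tgt inside the target namespace in interrupt mode, and waitforlisten blocks until the RPC socket answers. A rough stand-in for that startup handshake, assuming the SPDK repo root as working directory (the real waitforlisten in autotest_common.sh adds retry limits and better diagnostics):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target is ready to take commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'target died during startup'; exit 1; }
      sleep 0.5
  done

The -m 0xF mask matches the four reactors reported above, and --interrupt-mode is why every spdk_thread is switched to intr mode instead of polling.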
00:35:10.324 [2024-11-20 16:29:45.728757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:10.324 [2024-11-20 16:29:45.728789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 [2024-11-20 16:29:46.346749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 Malloc0 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
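rpc_cmd in this trace is the harness wrapper around scripts/rpc.py. Replayed by hand against the same socket, the provisioning for test case1 amounts to the sketch below; the final add_ns is the negative test and is expected to fail, because Malloc0 is already claimed exclusive_write by cnode1 (hence the -32602 "Invalid parameters" response in the log that follows):

  rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && echo 'BUG: second claim should fail'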
00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 [2024-11-20 16:29:46.434937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:10.586 test case1: single bdev can't be used in multiple subsystems 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 [2024-11-20 16:29:46.470368] bdev.c:8259:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:10.586 [2024-11-20 16:29:46.470397] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:10.586 [2024-11-20 16:29:46.470406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.586 request: 00:35:10.586 { 00:35:10.586 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:10.586 "namespace": { 00:35:10.586 "bdev_name": "Malloc0", 00:35:10.586 "no_auto_visible": false 00:35:10.586 }, 00:35:10.586 "method": "nvmf_subsystem_add_ns", 00:35:10.586 "req_id": 1 00:35:10.586 } 00:35:10.586 Got JSON-RPC error response 00:35:10.586 response: 00:35:10.586 { 00:35:10.586 "code": -32602, 00:35:10.586 "message": "Invalid parameters" 00:35:10.586 } 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:10.586 16:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:10.586 Adding namespace failed - expected result. 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:10.586 test case2: host connect to nvmf target in multiple paths 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.586 [2024-11-20 16:29:46.482534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.586 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:11.158 16:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:11.729 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:11.729 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:11.729 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:11.729 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:11.729 16:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:13.640 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:13.640 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:13.640 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:13.640 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:13.640 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:13.640 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:13.640 16:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:13.640 [global] 00:35:13.640 thread=1 00:35:13.640 invalidate=1 
00:35:13.640 rw=write 00:35:13.640 time_based=1 00:35:13.640 runtime=1 00:35:13.640 ioengine=libaio 00:35:13.640 direct=1 00:35:13.640 bs=4096 00:35:13.640 iodepth=1 00:35:13.640 norandommap=0 00:35:13.640 numjobs=1 00:35:13.640 00:35:13.640 verify_dump=1 00:35:13.640 verify_backlog=512 00:35:13.640 verify_state_save=0 00:35:13.640 do_verify=1 00:35:13.640 verify=crc32c-intel 00:35:13.640 [job0] 00:35:13.640 filename=/dev/nvme0n1 00:35:13.640 Could not set queue depth (nvme0n1) 00:35:13.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:13.899 fio-3.35 00:35:13.899 Starting 1 thread 00:35:15.282 00:35:15.282 job0: (groupid=0, jobs=1): err= 0: pid=1550062: Wed Nov 20 16:29:50 2024 00:35:15.282 read: IOPS=368, BW=1475KiB/s (1510kB/s)(1476KiB/1001msec) 00:35:15.282 slat (nsec): min=27383, max=56523, avg=28105.73, stdev=1815.15 00:35:15.282 clat (usec): min=795, max=42021, avg=1853.70, stdev=5914.24 00:35:15.282 lat (usec): min=824, max=42048, avg=1881.81, stdev=5914.24 00:35:15.282 clat percentiles (usec): 00:35:15.282 | 1.00th=[ 848], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 938], 00:35:15.282 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:35:15.282 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:35:15.282 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:15.282 | 99.99th=[42206] 00:35:15.282 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:35:15.282 slat (usec): min=5, max=30287, avg=86.61, stdev=1337.57 00:35:15.282 clat (usec): min=205, max=4257, avg=499.54, stdev=217.52 00:35:15.282 lat (usec): min=214, max=30950, avg=586.16, stdev=1363.29 00:35:15.282 clat percentiles (usec): 00:35:15.282 | 1.00th=[ 217], 5.00th=[ 237], 10.00th=[ 277], 20.00th=[ 343], 00:35:15.282 | 30.00th=[ 424], 40.00th=[ 482], 50.00th=[ 529], 60.00th=[ 553], 00:35:15.282 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 701], 00:35:15.282 | 99.00th=[ 758], 99.50th=[ 840], 99.90th=[ 4228], 99.95th=[ 4228], 00:35:15.282 | 99.99th=[ 4228] 00:35:15.282 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:15.282 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:15.282 lat (usec) : 250=4.65%, 500=20.09%, 750=32.69%, 1000=29.85% 00:35:15.282 lat (msec) : 2=11.58%, 4=0.11%, 10=0.11%, 50=0.91% 00:35:15.282 cpu : usr=2.30%, sys=2.60%, ctx=883, majf=0, minf=1 00:35:15.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.282 issued rwts: total=369,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.282 00:35:15.282 Run status group 0 (all jobs): 00:35:15.282 READ: bw=1475KiB/s (1510kB/s), 1475KiB/s-1475KiB/s (1510kB/s-1510kB/s), io=1476KiB (1511kB), run=1001-1001msec 00:35:15.282 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:35:15.282 00:35:15.282 Disk stats (read/write): 00:35:15.282 nvme0n1: ios=293/512, merge=0/0, ticks=1540/205, in_queue=1745, util=98.80% 00:35:15.282 16:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:15.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 
controller(s) 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.282 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:15.282 rmmod nvme_tcp 00:35:15.282 rmmod nvme_fabrics 00:35:15.282 rmmod nvme_keyring 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1549169 ']' 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1549169 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1549169 ']' 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1549169 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549169 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1549169' 00:35:15.543 killing process with pid 1549169 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1549169 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1549169 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.543 16:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:18.087 00:35:18.087 real 0m15.814s 00:35:18.087 user 0m38.601s 00:35:18.087 sys 0m7.517s 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:18.087 ************************************ 00:35:18.087 END TEST nvmf_nmic 00:35:18.087 ************************************ 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:18.087 ************************************ 00:35:18.087 START TEST nvmf_fio_target 00:35:18.087 ************************************ 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:18.087 * Looking for test storage... 
00:35:18.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.087 --rc genhtml_branch_coverage=1 00:35:18.087 --rc genhtml_function_coverage=1 00:35:18.087 --rc genhtml_legend=1 00:35:18.087 --rc geninfo_all_blocks=1 00:35:18.087 --rc geninfo_unexecuted_blocks=1 00:35:18.087 00:35:18.087 ' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.087 --rc genhtml_branch_coverage=1 00:35:18.087 --rc genhtml_function_coverage=1 00:35:18.087 --rc genhtml_legend=1 00:35:18.087 --rc geninfo_all_blocks=1 00:35:18.087 --rc geninfo_unexecuted_blocks=1 00:35:18.087 00:35:18.087 ' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.087 --rc genhtml_branch_coverage=1 00:35:18.087 --rc genhtml_function_coverage=1 00:35:18.087 --rc genhtml_legend=1 00:35:18.087 --rc geninfo_all_blocks=1 00:35:18.087 --rc geninfo_unexecuted_blocks=1 00:35:18.087 00:35:18.087 ' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.087 --rc genhtml_branch_coverage=1 00:35:18.087 --rc genhtml_function_coverage=1 00:35:18.087 --rc genhtml_legend=1 00:35:18.087 --rc geninfo_all_blocks=1 00:35:18.087 --rc geninfo_unexecuted_blocks=1 00:35:18.087 
00:35:18.087 ' 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.087 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:18.088 16:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.227 16:30:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:26.227 16:30:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:26.227 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:26.227 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.227 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:26.228 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:26.228 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:26.228 16:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:35:26.228 00:35:26.228 --- 10.0.0.2 ping statistics --- 00:35:26.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.228 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:35:26.228 00:35:26.228 --- 10.0.0.1 ping statistics --- 00:35:26.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.228 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1554721 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1554721 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1554721 ']' 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
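The ipts helper seen in both runs is plain iptables with a bookkeeping comment appended, which is what makes the later cleanup in nvmf_tcp_fini a one-liner. A sketch of the pair, using the same rule as the log:

  # open the NVMe/TCP port on the initiator-facing interface, tagged for cleanup
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown: drop every tagged rule in one pass (see the iptr call in the nmic cleanup above)
  iptables-save | grep -v SPDK_NVMF | iptables-restore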
00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.228 16:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.228 [2024-11-20 16:30:01.425013] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:26.228 [2024-11-20 16:30:01.426344] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:35:26.228 [2024-11-20 16:30:01.426395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.228 [2024-11-20 16:30:01.529705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:26.228 [2024-11-20 16:30:01.583897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.228 [2024-11-20 16:30:01.583955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.228 [2024-11-20 16:30:01.583963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.228 [2024-11-20 16:30:01.583971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.228 [2024-11-20 16:30:01.583977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.228 [2024-11-20 16:30:01.586036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.228 [2024-11-20 16:30:01.586214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.228 [2024-11-20 16:30:01.586309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:26.228 [2024-11-20 16:30:01.586310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.228 [2024-11-20 16:30:01.666712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:26.228 [2024-11-20 16:30:01.667717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:26.229 [2024-11-20 16:30:01.667963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:26.229 [2024-11-20 16:30:01.668423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:26.229 [2024-11-20 16:30:01.668471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
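The target provisioning traced over the next several entries condenses to roughly this sketch (rpc.py stands in for the full scripts/rpc.py path, the wait-for-socket handling of nvmfappstart is simplified to a comment, and the nvme connect host NQN/ID flags are omitted for brevity):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  # once /var/tmp/spdk.sock is listening:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do rpc.py bdev_malloc_create 64 512; done          # creates Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'           # striped raid0 bdev
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do                                  # four namespaces -> nvme0n1..n4
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
  done
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

After the connect, waitforserial polls lsblk for four devices whose serial matches SPDKISFASTANDAWESOME, which is what gates the fio runs that follow.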
00:35:26.489 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.489 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:26.489 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.489 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.489 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.489 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.489 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:26.756 [2024-11-20 16:30:02.459577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.756 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.154 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:27.154 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.154 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:27.154 16:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.446 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:27.446 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.446 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:27.446 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:27.707 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.968 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:27.968 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.230 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:28.230 16:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.230 16:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:28.230 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:28.491 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:28.752 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:28.752 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:28.752 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:28.752 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:29.013 16:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:29.274 [2024-11-20 16:30:05.015522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.274 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:29.536 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:29.536 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:30.107 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:30.107 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:30.107 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:30.107 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:30.107 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:30.107 16:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:32.023 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:32.023 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:32.023 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:32.023 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:32.023 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:32.023 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:32.023 16:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:32.283 [global] 00:35:32.283 thread=1 00:35:32.283 invalidate=1 00:35:32.283 rw=write 00:35:32.283 time_based=1 00:35:32.284 runtime=1 00:35:32.284 ioengine=libaio 00:35:32.284 direct=1 00:35:32.284 bs=4096 00:35:32.284 iodepth=1 00:35:32.284 norandommap=0 00:35:32.284 numjobs=1 00:35:32.284 00:35:32.284 verify_dump=1 00:35:32.284 verify_backlog=512 00:35:32.284 verify_state_save=0 00:35:32.284 do_verify=1 00:35:32.284 verify=crc32c-intel 00:35:32.284 [job0] 00:35:32.284 filename=/dev/nvme0n1 00:35:32.284 [job1] 00:35:32.284 filename=/dev/nvme0n2 00:35:32.284 [job2] 00:35:32.284 filename=/dev/nvme0n3 00:35:32.284 [job3] 00:35:32.284 filename=/dev/nvme0n4 00:35:32.284 Could not set queue depth (nvme0n1) 00:35:32.284 Could not set queue depth (nvme0n2) 00:35:32.284 Could not set queue depth (nvme0n3) 00:35:32.284 Could not set queue depth (nvme0n4) 00:35:32.545 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.545 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.545 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.545 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.545 fio-3.35 00:35:32.545 Starting 4 threads 00:35:33.925 00:35:33.925 job0: (groupid=0, jobs=1): err= 0: pid=1556193: Wed Nov 20 16:30:09 2024 00:35:33.925 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:33.925 slat (nsec): min=7086, max=56763, avg=22887.25, stdev=8391.54 00:35:33.925 clat (usec): min=315, max=41845, avg=1311.59, stdev=4688.69 00:35:33.925 lat (usec): min=325, max=41856, avg=1334.48, stdev=4688.93 00:35:33.925 clat percentiles (usec): 00:35:33.925 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 701], 00:35:33.925 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 783], 00:35:33.925 | 70.00th=[ 799], 80.00th=[ 832], 90.00th=[ 873], 95.00th=[ 914], 00:35:33.925 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:35:33.925 | 99.99th=[41681] 00:35:33.925 write: IOPS=712, BW=2849KiB/s (2918kB/s)(2852KiB/1001msec); 0 zone resets 00:35:33.925 slat (nsec): min=9773, max=71284, avg=26229.43, stdev=12846.43 00:35:33.925 clat (usec): min=176, max=901, avg=406.63, stdev=94.56 00:35:33.925 lat (usec): min=217, max=942, avg=432.86, stdev=98.54 00:35:33.925 clat percentiles (usec): 00:35:33.925 | 1.00th=[ 231], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 330], 00:35:33.925 | 30.00th=[ 351], 40.00th=[ 367], 50.00th=[ 392], 60.00th=[ 424], 00:35:33.925 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 529], 95.00th=[ 570], 00:35:33.925 | 99.00th=[ 
676], 99.50th=[ 725], 99.90th=[ 906], 99.95th=[ 906], 00:35:33.925 | 99.99th=[ 906] 00:35:33.925 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:35:33.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:33.925 lat (usec) : 250=1.22%, 500=48.33%, 750=27.02%, 1000=22.78% 00:35:33.925 lat (msec) : 2=0.08%, 50=0.57% 00:35:33.925 cpu : usr=1.90%, sys=2.80%, ctx=1227, majf=0, minf=1 00:35:33.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.925 issued rwts: total=512,713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:33.926 job1: (groupid=0, jobs=1): err= 0: pid=1556211: Wed Nov 20 16:30:09 2024 00:35:33.926 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:33.926 slat (nsec): min=26036, max=62958, avg=27167.58, stdev=2882.41 00:35:33.926 clat (usec): min=713, max=1426, avg=1144.13, stdev=117.28 00:35:33.926 lat (usec): min=740, max=1452, avg=1171.30, stdev=117.19 00:35:33.926 clat percentiles (usec): 00:35:33.926 | 1.00th=[ 807], 5.00th=[ 914], 10.00th=[ 979], 20.00th=[ 1057], 00:35:33.926 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1188], 00:35:33.926 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1303], 00:35:33.926 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1434], 99.95th=[ 1434], 00:35:33.926 | 99.99th=[ 1434] 00:35:33.926 write: IOPS=629, BW=2517KiB/s (2578kB/s)(2520KiB/1001msec); 0 zone resets 00:35:33.926 slat (nsec): min=5024, max=70934, avg=28932.19, stdev=11253.25 00:35:33.926 clat (usec): min=133, max=1044, avg=591.67, stdev=179.82 00:35:33.926 lat (usec): min=139, max=1079, avg=620.60, stdev=186.16 00:35:33.926 clat percentiles (usec): 00:35:33.926 | 1.00th=[ 210], 5.00th=[ 297], 10.00th=[ 347], 20.00th=[ 429], 00:35:33.926 | 30.00th=[ 494], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 652], 00:35:33.926 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 816], 95.00th=[ 898], 00:35:33.926 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1045], 99.95th=[ 1045], 00:35:33.926 | 99.99th=[ 1045] 00:35:33.926 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:35:33.926 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:33.926 lat (usec) : 250=1.23%, 500=15.94%, 750=28.11%, 1000=15.32% 00:35:33.926 lat (msec) : 2=39.40% 00:35:33.926 cpu : usr=1.80%, sys=3.10%, ctx=1144, majf=0, minf=1 00:35:33.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.926 issued rwts: total=512,630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:33.926 job2: (groupid=0, jobs=1): err= 0: pid=1556229: Wed Nov 20 16:30:09 2024 00:35:33.926 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:33.926 slat (nsec): min=3362, max=15000, avg=9387.31, stdev=613.23 00:35:33.926 clat (usec): min=739, max=1498, avg=1189.58, stdev=103.49 00:35:33.926 lat (usec): min=749, max=1507, avg=1198.97, stdev=103.42 00:35:33.926 clat percentiles (usec): 00:35:33.926 | 1.00th=[ 914], 5.00th=[ 1004], 10.00th=[ 1057], 20.00th=[ 1106], 
00:35:33.926 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:35:33.926 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1336], 00:35:33.926 | 99.00th=[ 1385], 99.50th=[ 1418], 99.90th=[ 1500], 99.95th=[ 1500], 00:35:33.926 | 99.99th=[ 1500] 00:35:33.926 write: IOPS=703, BW=2813KiB/s (2881kB/s)(2816KiB/1001msec); 0 zone resets 00:35:33.926 slat (nsec): min=3530, max=46261, avg=11297.66, stdev=5274.08 00:35:33.926 clat (usec): min=231, max=1128, avg=532.56, stdev=142.66 00:35:33.926 lat (usec): min=244, max=1140, avg=543.85, stdev=143.94 00:35:33.926 clat percentiles (usec): 00:35:33.926 | 1.00th=[ 293], 5.00th=[ 338], 10.00th=[ 363], 20.00th=[ 416], 00:35:33.926 | 30.00th=[ 449], 40.00th=[ 469], 50.00th=[ 498], 60.00th=[ 545], 00:35:33.926 | 70.00th=[ 603], 80.00th=[ 668], 90.00th=[ 734], 95.00th=[ 791], 00:35:33.926 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 1123], 99.95th=[ 1123], 00:35:33.926 | 99.99th=[ 1123] 00:35:33.926 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:35:33.926 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:33.926 lat (usec) : 250=0.16%, 500=28.95%, 750=23.60%, 1000=7.15% 00:35:33.926 lat (msec) : 2=40.13% 00:35:33.926 cpu : usr=0.50%, sys=1.10%, ctx=1219, majf=0, minf=1 00:35:33.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.926 issued rwts: total=512,704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:33.926 job3: (groupid=0, jobs=1): err= 0: pid=1556235: Wed Nov 20 16:30:09 2024 00:35:33.926 read: IOPS=18, BW=74.9KiB/s (76.7kB/s)(76.0KiB/1015msec) 00:35:33.926 slat (nsec): min=27621, max=28520, avg=27976.32, stdev=227.15 00:35:33.926 clat (usec): min=1250, max=42084, avg=39734.54, stdev=9323.28 00:35:33.926 lat (usec): min=1277, max=42113, avg=39762.51, stdev=9323.31 00:35:33.926 clat percentiles (usec): 00:35:33.926 | 1.00th=[ 1254], 5.00th=[ 1254], 10.00th=[41157], 20.00th=[41681], 00:35:33.926 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:35:33.926 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:33.926 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:33.926 | 99.99th=[42206] 00:35:33.926 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:35:33.926 slat (nsec): min=11302, max=59817, avg=27692.10, stdev=11134.48 00:35:33.926 clat (usec): min=178, max=1820, avg=471.56, stdev=175.64 00:35:33.926 lat (usec): min=194, max=1860, avg=499.25, stdev=178.68 00:35:33.926 clat percentiles (usec): 00:35:33.926 | 1.00th=[ 217], 5.00th=[ 269], 10.00th=[ 297], 20.00th=[ 330], 00:35:33.926 | 30.00th=[ 359], 40.00th=[ 392], 50.00th=[ 424], 60.00th=[ 474], 00:35:33.926 | 70.00th=[ 529], 80.00th=[ 619], 90.00th=[ 717], 95.00th=[ 816], 00:35:33.926 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 1827], 99.95th=[ 1827], 00:35:33.926 | 99.99th=[ 1827] 00:35:33.926 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:35:33.926 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:33.926 lat (usec) : 250=3.39%, 500=59.70%, 750=25.61%, 1000=7.53% 00:35:33.926 lat (msec) : 2=0.38%, 50=3.39% 00:35:33.926 cpu : usr=0.99%, sys=1.18%, ctx=532, majf=0, minf=1 00:35:33.926 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.926 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:33.926 00:35:33.926 Run status group 0 (all jobs): 00:35:33.926 READ: bw=6128KiB/s (6275kB/s), 74.9KiB/s-2046KiB/s (76.7kB/s-2095kB/s), io=6220KiB (6369kB), run=1001-1015msec 00:35:33.926 WRITE: bw=9.85MiB/s (10.3MB/s), 2018KiB/s-2849KiB/s (2066kB/s-2918kB/s), io=10.00MiB (10.5MB), run=1001-1015msec 00:35:33.926 00:35:33.926 Disk stats (read/write): 00:35:33.926 nvme0n1: ios=455/512, merge=0/0, ticks=684/206, in_queue=890, util=86.97% 00:35:33.926 nvme0n2: ios=483/512, merge=0/0, ticks=585/295, in_queue=880, util=90.81% 00:35:33.926 nvme0n3: ios=535/512, merge=0/0, ticks=689/254, in_queue=943, util=95.03% 00:35:33.926 nvme0n4: ios=37/512, merge=0/0, ticks=1426/232, in_queue=1658, util=94.22% 00:35:33.926 16:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:33.926 [global] 00:35:33.926 thread=1 00:35:33.926 invalidate=1 00:35:33.926 rw=randwrite 00:35:33.926 time_based=1 00:35:33.926 runtime=1 00:35:33.926 ioengine=libaio 00:35:33.926 direct=1 00:35:33.926 bs=4096 00:35:33.926 iodepth=1 00:35:33.926 norandommap=0 00:35:33.926 numjobs=1 00:35:33.926 00:35:33.926 verify_dump=1 00:35:33.926 verify_backlog=512 00:35:33.926 verify_state_save=0 00:35:33.926 do_verify=1 00:35:33.926 verify=crc32c-intel 00:35:33.926 [job0] 00:35:33.926 filename=/dev/nvme0n1 00:35:33.926 [job1] 00:35:33.926 filename=/dev/nvme0n2 00:35:33.926 [job2] 00:35:33.926 filename=/dev/nvme0n3 00:35:33.926 [job3] 00:35:33.926 filename=/dev/nvme0n4 00:35:33.926 Could not set queue depth (nvme0n1) 00:35:33.926 Could not set queue depth (nvme0n2) 00:35:33.926 Could not set queue depth (nvme0n3) 00:35:33.926 Could not set queue depth (nvme0n4) 00:35:34.187 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.187 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.187 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.187 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:34.187 fio-3.35 00:35:34.187 Starting 4 threads 00:35:35.575 00:35:35.575 job0: (groupid=0, jobs=1): err= 0: pid=1556948: Wed Nov 20 16:30:11 2024 00:35:35.575 read: IOPS=385, BW=1541KiB/s (1578kB/s)(1564KiB/1015msec) 00:35:35.575 slat (nsec): min=7924, max=57859, avg=28615.73, stdev=6570.81 00:35:35.575 clat (usec): min=437, max=41991, avg=1732.94, stdev=5809.35 00:35:35.575 lat (usec): min=465, max=42018, avg=1761.56, stdev=5808.84 00:35:35.575 clat percentiles (usec): 00:35:35.575 | 1.00th=[ 465], 5.00th=[ 594], 10.00th=[ 676], 20.00th=[ 766], 00:35:35.575 | 30.00th=[ 832], 40.00th=[ 881], 50.00th=[ 930], 60.00th=[ 955], 00:35:35.575 | 70.00th=[ 996], 80.00th=[ 1037], 90.00th=[ 1090], 95.00th=[ 1156], 00:35:35.575 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:35.575 | 99.99th=[42206] 00:35:35.575 write: IOPS=504, BW=2018KiB/s 
(2066kB/s)(2048KiB/1015msec); 0 zone resets 00:35:35.575 slat (nsec): min=9327, max=81730, avg=32301.72, stdev=9735.57 00:35:35.575 clat (usec): min=164, max=2378, avg=588.27, stdev=185.58 00:35:35.575 lat (usec): min=198, max=2390, avg=620.58, stdev=186.49 00:35:35.575 clat percentiles (usec): 00:35:35.575 | 1.00th=[ 265], 5.00th=[ 338], 10.00th=[ 379], 20.00th=[ 445], 00:35:35.575 | 30.00th=[ 502], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 627], 00:35:35.575 | 70.00th=[ 668], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 840], 00:35:35.575 | 99.00th=[ 938], 99.50th=[ 1680], 99.90th=[ 2376], 99.95th=[ 2376], 00:35:35.575 | 99.99th=[ 2376] 00:35:35.575 bw ( KiB/s): min= 4096, max= 4096, per=43.31%, avg=4096.00, stdev= 0.00, samples=1 00:35:35.575 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:35.575 lat (usec) : 250=0.33%, 500=17.50%, 750=38.87%, 1000=30.56% 00:35:35.575 lat (msec) : 2=11.74%, 4=0.11%, 50=0.89% 00:35:35.575 cpu : usr=1.97%, sys=3.45%, ctx=906, majf=0, minf=1 00:35:35.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.575 issued rwts: total=391,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.575 job1: (groupid=0, jobs=1): err= 0: pid=1556976: Wed Nov 20 16:30:11 2024 00:35:35.575 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:35.575 slat (nsec): min=6667, max=61464, avg=25442.86, stdev=6684.92 00:35:35.575 clat (usec): min=490, max=1200, avg=822.29, stdev=142.49 00:35:35.575 lat (usec): min=533, max=1227, avg=847.73, stdev=143.93 00:35:35.575 clat percentiles (usec): 00:35:35.575 | 1.00th=[ 570], 5.00th=[ 619], 10.00th=[ 668], 20.00th=[ 709], 00:35:35.575 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 824], 00:35:35.575 | 70.00th=[ 889], 80.00th=[ 963], 90.00th=[ 1037], 95.00th=[ 1090], 00:35:35.575 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:35:35.575 | 99.99th=[ 1205] 00:35:35.575 write: IOPS=910, BW=3640KiB/s (3728kB/s)(3644KiB/1001msec); 0 zone resets 00:35:35.575 slat (nsec): min=8897, max=66997, avg=28670.57, stdev=10117.13 00:35:35.575 clat (usec): min=139, max=1106, avg=581.01, stdev=165.68 00:35:35.575 lat (usec): min=149, max=1141, avg=609.68, stdev=167.04 00:35:35.575 clat percentiles (usec): 00:35:35.575 | 1.00th=[ 255], 5.00th=[ 351], 10.00th=[ 379], 20.00th=[ 424], 00:35:35.575 | 30.00th=[ 486], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 603], 00:35:35.575 | 70.00th=[ 668], 80.00th=[ 725], 90.00th=[ 816], 95.00th=[ 881], 00:35:35.575 | 99.00th=[ 963], 99.50th=[ 996], 99.90th=[ 1106], 99.95th=[ 1106], 00:35:35.575 | 99.99th=[ 1106] 00:35:35.575 bw ( KiB/s): min= 4096, max= 4096, per=43.31%, avg=4096.00, stdev= 0.00, samples=1 00:35:35.575 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:35.576 lat (usec) : 250=0.56%, 500=21.86%, 750=44.34%, 1000=27.41% 00:35:35.576 lat (msec) : 2=5.83% 00:35:35.576 cpu : usr=2.20%, sys=5.60%, ctx=1423, majf=0, minf=1 00:35:35.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.576 issued rwts: total=512,911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.576 
latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.576 job2: (groupid=0, jobs=1): err= 0: pid=1556997: Wed Nov 20 16:30:11 2024 00:35:35.576 read: IOPS=294, BW=1178KiB/s (1206kB/s)(1180KiB/1002msec) 00:35:35.576 slat (nsec): min=24638, max=56446, avg=26510.73, stdev=3679.50 00:35:35.576 clat (usec): min=940, max=42014, avg=2146.60, stdev=6132.29 00:35:35.576 lat (usec): min=966, max=42040, avg=2173.11, stdev=6132.19 00:35:35.576 clat percentiles (usec): 00:35:35.576 | 1.00th=[ 963], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1139], 00:35:35.576 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:35:35.576 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1336], 00:35:35.576 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:35.576 | 99.99th=[42206] 00:35:35.576 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:35:35.576 slat (nsec): min=9512, max=68190, avg=28533.23, stdev=9383.40 00:35:35.576 clat (usec): min=213, max=996, avg=662.39, stdev=132.73 00:35:35.576 lat (usec): min=224, max=1029, avg=690.92, stdev=136.99 00:35:35.576 clat percentiles (usec): 00:35:35.576 | 1.00th=[ 363], 5.00th=[ 408], 10.00th=[ 486], 20.00th=[ 545], 00:35:35.576 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 717], 00:35:35.576 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 816], 95.00th=[ 857], 00:35:35.576 | 99.00th=[ 938], 99.50th=[ 988], 99.90th=[ 996], 99.95th=[ 996], 00:35:35.576 | 99.99th=[ 996] 00:35:35.576 bw ( KiB/s): min= 4096, max= 4096, per=43.31%, avg=4096.00, stdev= 0.00, samples=1 00:35:35.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:35.576 lat (usec) : 250=0.12%, 500=7.31%, 750=37.79%, 1000=18.71% 00:35:35.576 lat (msec) : 2=35.19%, 50=0.87% 00:35:35.576 cpu : usr=1.50%, sys=2.00%, ctx=807, majf=0, minf=2 00:35:35.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.576 issued rwts: total=295,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.576 job3: (groupid=0, jobs=1): err= 0: pid=1557004: Wed Nov 20 16:30:11 2024 00:35:35.576 read: IOPS=19, BW=77.3KiB/s (79.1kB/s)(80.0KiB/1035msec) 00:35:35.576 slat (nsec): min=28051, max=33136, avg=28977.95, stdev=1456.17 00:35:35.576 clat (usec): min=801, max=44996, avg=35843.77, stdev=15049.39 00:35:35.576 lat (usec): min=830, max=45029, avg=35872.75, stdev=15049.60 00:35:35.576 clat percentiles (usec): 00:35:35.576 | 1.00th=[ 799], 5.00th=[ 799], 10.00th=[ 898], 20.00th=[41157], 00:35:35.576 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:35.576 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:35.576 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:35:35.576 | 99.99th=[44827] 00:35:35.576 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:35:35.576 slat (nsec): min=9312, max=79255, avg=33549.01, stdev=8503.48 00:35:35.576 clat (usec): min=207, max=2043, avg=574.59, stdev=183.98 00:35:35.576 lat (usec): min=240, max=2078, avg=608.14, stdev=185.28 00:35:35.576 clat percentiles (usec): 00:35:35.576 | 1.00th=[ 273], 5.00th=[ 306], 10.00th=[ 338], 20.00th=[ 400], 00:35:35.576 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 
00:35:35.576 | 70.00th=[ 635], 80.00th=[ 701], 90.00th=[ 783], 95.00th=[ 857], 00:35:35.576 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 2040], 99.95th=[ 2040], 00:35:35.576 | 99.99th=[ 2040] 00:35:35.576 bw ( KiB/s): min= 4096, max= 4096, per=43.31%, avg=4096.00, stdev= 0.00, samples=1 00:35:35.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:35.576 lat (usec) : 250=0.56%, 500=26.88%, 750=56.02%, 1000=12.78% 00:35:35.576 lat (msec) : 2=0.19%, 4=0.38%, 50=3.20% 00:35:35.576 cpu : usr=1.06%, sys=2.13%, ctx=536, majf=0, minf=1 00:35:35.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.576 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:35.576 00:35:35.576 Run status group 0 (all jobs): 00:35:35.576 READ: bw=4707KiB/s (4820kB/s), 77.3KiB/s-2046KiB/s (79.1kB/s-2095kB/s), io=4872KiB (4989kB), run=1001-1035msec 00:35:35.576 WRITE: bw=9457KiB/s (9684kB/s), 1979KiB/s-3640KiB/s (2026kB/s-3728kB/s), io=9788KiB (10.0MB), run=1001-1035msec 00:35:35.576 00:35:35.576 Disk stats (read/write): 00:35:35.576 nvme0n1: ios=388/512, merge=0/0, ticks=601/266, in_queue=867, util=91.68% 00:35:35.576 nvme0n2: ios=539/631, merge=0/0, ticks=446/324, in_queue=770, util=90.82% 00:35:35.576 nvme0n3: ios=275/512, merge=0/0, ticks=531/329, in_queue=860, util=92.07% 00:35:35.576 nvme0n4: ios=63/512, merge=0/0, ticks=1008/238, in_queue=1246, util=99.14% 00:35:35.576 16:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:35.576 [global] 00:35:35.576 thread=1 00:35:35.576 invalidate=1 00:35:35.576 rw=write 00:35:35.576 time_based=1 00:35:35.576 runtime=1 00:35:35.576 ioengine=libaio 00:35:35.576 direct=1 00:35:35.576 bs=4096 00:35:35.576 iodepth=128 00:35:35.576 norandommap=0 00:35:35.576 numjobs=1 00:35:35.576 00:35:35.576 verify_dump=1 00:35:35.576 verify_backlog=512 00:35:35.576 verify_state_save=0 00:35:35.576 do_verify=1 00:35:35.576 verify=crc32c-intel 00:35:35.576 [job0] 00:35:35.576 filename=/dev/nvme0n1 00:35:35.576 [job1] 00:35:35.576 filename=/dev/nvme0n2 00:35:35.576 [job2] 00:35:35.576 filename=/dev/nvme0n3 00:35:35.576 [job3] 00:35:35.576 filename=/dev/nvme0n4 00:35:35.576 Could not set queue depth (nvme0n1) 00:35:35.576 Could not set queue depth (nvme0n2) 00:35:35.576 Could not set queue depth (nvme0n3) 00:35:35.576 Could not set queue depth (nvme0n4) 00:35:35.837 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.837 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.837 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.837 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.837 fio-3.35 00:35:35.837 Starting 4 threads 00:35:37.220 00:35:37.220 job0: (groupid=0, jobs=1): err= 0: pid=1557602: Wed Nov 20 16:30:12 2024 00:35:37.220 read: IOPS=9647, BW=37.7MiB/s (39.5MB/s)(37.9MiB/1005msec) 00:35:37.220 slat (nsec): min=948, max=7187.8k, avg=50936.45, stdev=390714.22 00:35:37.220 clat (usec): 
min=1847, max=17317, avg=6950.42, stdev=2198.37 00:35:37.220 lat (usec): min=1859, max=17320, avg=7001.36, stdev=2217.77 00:35:37.220 clat percentiles (usec): 00:35:37.220 | 1.00th=[ 3490], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5211], 00:35:37.220 | 30.00th=[ 5538], 40.00th=[ 5997], 50.00th=[ 6587], 60.00th=[ 7111], 00:35:37.220 | 70.00th=[ 7701], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[10945], 00:35:37.220 | 99.00th=[15008], 99.50th=[15401], 99.90th=[17171], 99.95th=[17171], 00:35:37.220 | 99.99th=[17433] 00:35:37.220 write: IOPS=9679, BW=37.8MiB/s (39.6MB/s)(38.0MiB/1005msec); 0 zone resets 00:35:37.220 slat (nsec): min=1611, max=7116.6k, avg=47623.62, stdev=336584.51 00:35:37.220 clat (usec): min=1082, max=20532, avg=6174.35, stdev=2548.40 00:35:37.220 lat (usec): min=1090, max=20536, avg=6221.97, stdev=2562.16 00:35:37.220 clat percentiles (usec): 00:35:37.220 | 1.00th=[ 2376], 5.00th=[ 3392], 10.00th=[ 3785], 20.00th=[ 4621], 00:35:37.220 | 30.00th=[ 5276], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5800], 00:35:37.220 | 70.00th=[ 5997], 80.00th=[ 7046], 90.00th=[ 9241], 95.00th=[11076], 00:35:37.220 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[20579], 00:35:37.220 | 99.99th=[20579] 00:35:37.220 bw ( KiB/s): min=36864, max=40960, per=44.49%, avg=38912.00, stdev=2896.31, samples=2 00:35:37.220 iops : min= 9216, max=10240, avg=9728.00, stdev=724.08, samples=2 00:35:37.220 lat (msec) : 2=0.26%, 4=7.91%, 10=84.20%, 20=7.59%, 50=0.04% 00:35:37.220 cpu : usr=6.57%, sys=7.37%, ctx=656, majf=0, minf=2 00:35:37.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:35:37.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.220 issued rwts: total=9696,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.220 job1: (groupid=0, jobs=1): err= 0: pid=1557606: Wed Nov 20 16:30:12 2024 00:35:37.220 read: IOPS=3235, BW=12.6MiB/s (13.3MB/s)(13.2MiB/1048msec) 00:35:37.220 slat (nsec): min=939, max=10266k, avg=115081.98, stdev=692124.25 00:35:37.220 clat (usec): min=3140, max=61204, avg=12779.84, stdev=10860.56 00:35:37.220 lat (usec): min=3148, max=61213, avg=12894.92, stdev=10923.97 00:35:37.220 clat percentiles (usec): 00:35:37.220 | 1.00th=[ 3884], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 7373], 00:35:37.220 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 8979], 00:35:37.220 | 70.00th=[11207], 80.00th=[14746], 90.00th=[22676], 95.00th=[45876], 00:35:37.220 | 99.00th=[54264], 99.50th=[54789], 99.90th=[61080], 99.95th=[61080], 00:35:37.220 | 99.99th=[61080] 00:35:37.220 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1048msec); 0 zone resets 00:35:37.220 slat (nsec): min=1644, max=12690k, avg=166151.90, stdev=827612.85 00:35:37.220 clat (msec): min=2, max=121, avg=25.00, stdev=23.92 00:35:37.220 lat (msec): min=2, max=121, avg=25.17, stdev=24.06 00:35:37.220 clat percentiles (msec): 00:35:37.220 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:35:37.220 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 19], 00:35:37.220 | 70.00th=[ 24], 80.00th=[ 39], 90.00th=[ 62], 95.00th=[ 85], 00:35:37.220 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 122], 99.95th=[ 122], 00:35:37.220 | 99.99th=[ 122] 00:35:37.220 bw ( KiB/s): min= 8848, max=19824, per=16.39%, avg=14336.00, stdev=7761.20, samples=2 00:35:37.220 iops : min= 2212, max= 4956, 
avg=3584.00, stdev=1940.30, samples=2 00:35:37.220 lat (msec) : 4=1.62%, 10=42.21%, 20=33.36%, 50=14.49%, 100=7.43% 00:35:37.220 lat (msec) : 250=0.89% 00:35:37.220 cpu : usr=2.96%, sys=3.06%, ctx=444, majf=0, minf=1 00:35:37.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:37.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.220 issued rwts: total=3391,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.220 job2: (groupid=0, jobs=1): err= 0: pid=1557621: Wed Nov 20 16:30:12 2024 00:35:37.220 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:35:37.220 slat (nsec): min=964, max=12033k, avg=118063.65, stdev=794411.30 00:35:37.220 clat (usec): min=3598, max=49475, avg=14286.76, stdev=6964.37 00:35:37.220 lat (usec): min=3607, max=49483, avg=14404.83, stdev=7025.37 00:35:37.220 clat percentiles (usec): 00:35:37.220 | 1.00th=[ 6259], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 9110], 00:35:37.220 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[12256], 60.00th=[13960], 00:35:37.220 | 70.00th=[16712], 80.00th=[18482], 90.00th=[23462], 95.00th=[27132], 00:35:37.220 | 99.00th=[44827], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:35:37.220 | 99.99th=[49546] 00:35:37.220 write: IOPS=4001, BW=15.6MiB/s (16.4MB/s)(15.9MiB/1015msec); 0 zone resets 00:35:37.220 slat (nsec): min=1639, max=11557k, avg=134850.21, stdev=700351.07 00:35:37.220 clat (usec): min=712, max=59665, avg=19089.70, stdev=10600.79 00:35:37.220 lat (usec): min=720, max=59674, avg=19224.55, stdev=10656.70 00:35:37.220 clat percentiles (usec): 00:35:37.220 | 1.00th=[ 1680], 5.00th=[ 5997], 10.00th=[ 7570], 20.00th=[10552], 00:35:37.220 | 30.00th=[13173], 40.00th=[15139], 50.00th=[15926], 60.00th=[18482], 00:35:37.220 | 70.00th=[21627], 80.00th=[28443], 90.00th=[36439], 95.00th=[39060], 00:35:37.220 | 99.00th=[49546], 99.50th=[52167], 99.90th=[59507], 99.95th=[59507], 00:35:37.220 | 99.99th=[59507] 00:35:37.220 bw ( KiB/s): min=14632, max=16840, per=17.99%, avg=15736.00, stdev=1561.29, samples=2 00:35:37.220 iops : min= 3658, max= 4210, avg=3934.00, stdev=390.32, samples=2 00:35:37.220 lat (usec) : 750=0.04% 00:35:37.220 lat (msec) : 2=0.58%, 4=1.28%, 10=24.73%, 20=48.78%, 50=24.10% 00:35:37.220 lat (msec) : 100=0.48% 00:35:37.220 cpu : usr=3.06%, sys=4.24%, ctx=402, majf=0, minf=2 00:35:37.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:37.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.220 issued rwts: total=3584,4062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.220 job3: (groupid=0, jobs=1): err= 0: pid=1557631: Wed Nov 20 16:30:12 2024 00:35:37.220 read: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1015msec) 00:35:37.220 slat (nsec): min=1023, max=15496k, avg=85352.74, stdev=669610.42 00:35:37.220 clat (usec): min=3175, max=40257, avg=11165.17, stdev=5033.12 00:35:37.220 lat (usec): min=3179, max=40259, avg=11250.52, stdev=5075.83 00:35:37.220 clat percentiles (usec): 00:35:37.220 | 1.00th=[ 4490], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7177], 00:35:37.220 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 9372], 60.00th=[11076], 00:35:37.220 | 70.00th=[12518], 
80.00th=[15270], 90.00th=[16909], 95.00th=[20579], 00:35:37.220 | 99.00th=[30802], 99.50th=[34866], 99.90th=[38536], 99.95th=[40109], 00:35:37.220 | 99.99th=[40109] 00:35:37.220 write: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.6MiB/1015msec); 0 zone resets 00:35:37.220 slat (nsec): min=1733, max=18144k, avg=92684.79, stdev=605751.60 00:35:37.220 clat (usec): min=744, max=41157, avg=12499.80, stdev=8733.47 00:35:37.220 lat (usec): min=754, max=41160, avg=12592.49, stdev=8782.44 00:35:37.220 clat percentiles (usec): 00:35:37.220 | 1.00th=[ 2704], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 6456], 00:35:37.220 | 30.00th=[ 7177], 40.00th=[ 7963], 50.00th=[ 9372], 60.00th=[10552], 00:35:37.220 | 70.00th=[12387], 80.00th=[18482], 90.00th=[26870], 95.00th=[34341], 00:35:37.220 | 99.00th=[39584], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:35:37.220 | 99.99th=[41157] 00:35:37.220 bw ( KiB/s): min=18736, max=24576, per=24.76%, avg=21656.00, stdev=4129.50, samples=2 00:35:37.220 iops : min= 4684, max= 6144, avg=5414.00, stdev=1032.38, samples=2 00:35:37.220 lat (usec) : 750=0.02%, 1000=0.01% 00:35:37.220 lat (msec) : 2=0.19%, 4=1.22%, 10=52.90%, 20=35.41%, 50=10.26% 00:35:37.220 cpu : usr=2.66%, sys=7.30%, ctx=404, majf=0, minf=1 00:35:37.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:37.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.220 issued rwts: total=5120,5542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.220 00:35:37.221 Run status group 0 (all jobs): 00:35:37.221 READ: bw=81.2MiB/s (85.2MB/s), 12.6MiB/s-37.7MiB/s (13.3MB/s-39.5MB/s), io=85.1MiB (89.3MB), run=1005-1048msec 00:35:37.221 WRITE: bw=85.4MiB/s (89.6MB/s), 13.4MiB/s-37.8MiB/s (14.0MB/s-39.6MB/s), io=89.5MiB (93.9MB), run=1005-1048msec 00:35:37.221 00:35:37.221 Disk stats (read/write): 00:35:37.221 nvme0n1: ios=7730/7967, merge=0/0, ticks=51959/48816, in_queue=100775, util=91.98% 00:35:37.221 nvme0n2: ios=3119/3143, merge=0/0, ticks=33280/68945, in_queue=102225, util=96.63% 00:35:37.221 nvme0n3: ios=3120/3527, merge=0/0, ticks=32491/55715, in_queue=88206, util=92.62% 00:35:37.221 nvme0n4: ios=4231/4608, merge=0/0, ticks=46607/53915, in_queue=100522, util=97.01% 00:35:37.221 16:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:37.221 [global] 00:35:37.221 thread=1 00:35:37.221 invalidate=1 00:35:37.221 rw=randwrite 00:35:37.221 time_based=1 00:35:37.221 runtime=1 00:35:37.221 ioengine=libaio 00:35:37.221 direct=1 00:35:37.221 bs=4096 00:35:37.221 iodepth=128 00:35:37.221 norandommap=0 00:35:37.221 numjobs=1 00:35:37.221 00:35:37.221 verify_dump=1 00:35:37.221 verify_backlog=512 00:35:37.221 verify_state_save=0 00:35:37.221 do_verify=1 00:35:37.221 verify=crc32c-intel 00:35:37.221 [job0] 00:35:37.221 filename=/dev/nvme0n1 00:35:37.221 [job1] 00:35:37.221 filename=/dev/nvme0n2 00:35:37.221 [job2] 00:35:37.221 filename=/dev/nvme0n3 00:35:37.221 [job3] 00:35:37.221 filename=/dev/nvme0n4 00:35:37.221 Could not set queue depth (nvme0n1) 00:35:37.221 Could not set queue depth (nvme0n2) 00:35:37.221 Could not set queue depth (nvme0n3) 00:35:37.221 Could not set queue depth (nvme0n4) 00:35:37.791 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.791 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.791 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.791 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.791 fio-3.35 00:35:37.791 Starting 4 threads 00:35:39.176 00:35:39.176 job0: (groupid=0, jobs=1): err= 0: pid=1558111: Wed Nov 20 16:30:14 2024 00:35:39.176 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:35:39.176 slat (nsec): min=894, max=4255.5k, avg=67071.09, stdev=382370.31 00:35:39.176 clat (usec): min=4591, max=13240, avg=8504.82, stdev=1237.09 00:35:39.176 lat (usec): min=4899, max=13255, avg=8571.89, stdev=1277.15 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7439], 00:35:39.176 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 8979], 00:35:39.176 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10421], 00:35:39.176 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12649], 99.95th=[13042], 00:35:39.176 | 99.99th=[13304] 00:35:39.176 write: IOPS=7786, BW=30.4MiB/s (31.9MB/s)(30.5MiB/1003msec); 0 zone resets 00:35:39.176 slat (nsec): min=1492, max=4063.3k, avg=58581.43, stdev=309174.14 00:35:39.176 clat (usec): min=2212, max=12884, avg=7905.43, stdev=1114.50 00:35:39.176 lat (usec): min=2985, max=12896, avg=7964.01, stdev=1141.03 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 4686], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7046], 00:35:39.176 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8225], 00:35:39.176 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9372], 00:35:39.176 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12387], 99.95th=[12649], 00:35:39.176 | 99.99th=[12911] 00:35:39.176 bw ( KiB/s): min=28768, max=32768, per=29.42%, avg=30768.00, stdev=2828.43, samples=2 00:35:39.176 iops : min= 7192, max= 8192, avg=7692.00, stdev=707.11, samples=2 00:35:39.176 lat (msec) : 4=0.15%, 10=94.38%, 20=5.46% 00:35:39.176 cpu : usr=4.09%, sys=5.29%, ctx=877, majf=0, minf=1 00:35:39.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:39.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.176 issued rwts: total=7680,7810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:39.176 job1: (groupid=0, jobs=1): err= 0: pid=1558116: Wed Nov 20 16:30:14 2024 00:35:39.176 read: IOPS=7751, BW=30.3MiB/s (31.7MB/s)(30.4MiB/1005msec) 00:35:39.176 slat (nsec): min=950, max=7399.8k, avg=65510.89, stdev=493311.93 00:35:39.176 clat (usec): min=1563, max=17067, avg=8523.92, stdev=2273.98 00:35:39.176 lat (usec): min=1571, max=21069, avg=8589.43, stdev=2303.59 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 4228], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6652], 00:35:39.176 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8717], 00:35:39.176 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11731], 95.00th=[12649], 00:35:39.176 | 99.00th=[15533], 99.50th=[15664], 99.90th=[16319], 99.95th=[16319], 00:35:39.176 | 99.99th=[17171] 00:35:39.176 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 
zone resets 00:35:39.176 slat (nsec): min=1581, max=6976.8k, avg=55052.62, stdev=369528.65 00:35:39.176 clat (usec): min=1160, max=19865, avg=7460.66, stdev=2334.93 00:35:39.176 lat (usec): min=1169, max=19867, avg=7515.71, stdev=2346.34 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 2671], 5.00th=[ 4113], 10.00th=[ 4883], 20.00th=[ 5997], 00:35:39.176 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7242], 60.00th=[ 7767], 00:35:39.176 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9765], 95.00th=[11863], 00:35:39.176 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[19792], 00:35:39.176 | 99.99th=[19792] 00:35:39.176 bw ( KiB/s): min=30128, max=35272, per=31.27%, avg=32700.00, stdev=3637.36, samples=2 00:35:39.176 iops : min= 7532, max= 8818, avg=8175.00, stdev=909.34, samples=2 00:35:39.176 lat (msec) : 2=0.16%, 4=2.23%, 10=82.00%, 20=15.61% 00:35:39.176 cpu : usr=4.58%, sys=8.27%, ctx=608, majf=0, minf=1 00:35:39.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:39.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.176 issued rwts: total=7790,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:39.176 job2: (groupid=0, jobs=1): err= 0: pid=1558126: Wed Nov 20 16:30:14 2024 00:35:39.176 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:35:39.176 slat (nsec): min=951, max=10565k, avg=77274.75, stdev=589002.56 00:35:39.176 clat (usec): min=2218, max=32741, avg=9857.21, stdev=3627.57 00:35:39.176 lat (usec): min=2221, max=32749, avg=9934.48, stdev=3666.83 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7504], 00:35:39.176 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[ 9634], 00:35:39.176 | 70.00th=[10159], 80.00th=[10945], 90.00th=[13829], 95.00th=[16319], 00:35:39.176 | 99.00th=[25560], 99.50th=[30278], 99.90th=[32375], 99.95th=[32637], 00:35:39.176 | 99.99th=[32637] 00:35:39.176 write: IOPS=6687, BW=26.1MiB/s (27.4MB/s)(26.2MiB/1004msec); 0 zone resets 00:35:39.176 slat (nsec): min=1551, max=8046.7k, avg=66935.14, stdev=390625.05 00:35:39.176 clat (usec): min=1143, max=25654, avg=9191.20, stdev=3205.74 00:35:39.176 lat (usec): min=1153, max=25657, avg=9258.13, stdev=3230.70 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 3392], 5.00th=[ 4817], 10.00th=[ 5866], 20.00th=[ 7111], 00:35:39.176 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 9110], 60.00th=[ 9634], 00:35:39.176 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[13435], 95.00th=[15401], 00:35:39.176 | 99.00th=[21890], 99.50th=[22414], 99.90th=[23987], 99.95th=[24773], 00:35:39.176 | 99.99th=[25560] 00:35:39.176 bw ( KiB/s): min=23640, max=29608, per=25.46%, avg=26624.00, stdev=4220.01, samples=2 00:35:39.176 iops : min= 5910, max= 7402, avg=6656.00, stdev=1055.00, samples=2 00:35:39.176 lat (msec) : 2=0.17%, 4=0.99%, 10=70.02%, 20=26.72%, 50=2.09% 00:35:39.176 cpu : usr=4.89%, sys=6.08%, ctx=621, majf=0, minf=1 00:35:39.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:39.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.176 issued rwts: total=6656,6714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.176 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:35:39.176 job3: (groupid=0, jobs=1): err= 0: pid=1558129: Wed Nov 20 16:30:14 2024 00:35:39.176 read: IOPS=4273, BW=16.7MiB/s (17.5MB/s)(17.4MiB/1045msec) 00:35:39.176 slat (nsec): min=959, max=17828k, avg=120589.47, stdev=871841.23 00:35:39.176 clat (usec): min=5662, max=73674, avg=16096.70, stdev=12731.05 00:35:39.176 lat (usec): min=5670, max=73681, avg=16217.29, stdev=12810.10 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 6521], 5.00th=[ 7111], 10.00th=[ 7832], 20.00th=[ 8586], 00:35:39.176 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[12780], 00:35:39.176 | 70.00th=[14484], 80.00th=[19792], 90.00th=[37487], 95.00th=[47449], 00:35:39.176 | 99.00th=[64226], 99.50th=[69731], 99.90th=[73925], 99.95th=[73925], 00:35:39.176 | 99.99th=[73925] 00:35:39.176 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:35:39.176 slat (nsec): min=1613, max=9408.3k, avg=94392.76, stdev=646045.04 00:35:39.176 clat (usec): min=1236, max=66324, avg=13157.23, stdev=9155.49 00:35:39.176 lat (usec): min=1245, max=66332, avg=13251.63, stdev=9211.21 00:35:39.176 clat percentiles (usec): 00:35:39.176 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 7963], 20.00th=[ 8848], 00:35:39.176 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:35:39.176 | 70.00th=[10814], 80.00th=[13304], 90.00th=[27657], 95.00th=[30016], 00:35:39.176 | 99.00th=[55313], 99.50th=[65274], 99.90th=[66323], 99.95th=[66323], 00:35:39.176 | 99.99th=[66323] 00:35:39.176 bw ( KiB/s): min=16384, max=20480, per=17.62%, avg=18432.00, stdev=2896.31, samples=2 00:35:39.176 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:35:39.176 lat (msec) : 2=0.02%, 10=53.42%, 20=28.84%, 50=15.02%, 100=2.70% 00:35:39.176 cpu : usr=2.87%, sys=5.17%, ctx=393, majf=0, minf=2 00:35:39.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:39.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.177 issued rwts: total=4466,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:39.177 00:35:39.177 Run status group 0 (all jobs): 00:35:39.177 READ: bw=99.4MiB/s (104MB/s), 16.7MiB/s-30.3MiB/s (17.5MB/s-31.7MB/s), io=104MiB (109MB), run=1003-1045msec 00:35:39.177 WRITE: bw=102MiB/s (107MB/s), 17.2MiB/s-31.8MiB/s (18.1MB/s-33.4MB/s), io=107MiB (112MB), run=1003-1045msec 00:35:39.177 00:35:39.177 Disk stats (read/write): 00:35:39.177 nvme0n1: ios=6484/6656, merge=0/0, ticks=23078/20980, in_queue=44058, util=91.88% 00:35:39.177 nvme0n2: ios=6556/6656, merge=0/0, ticks=53019/47724, in_queue=100743, util=92.04% 00:35:39.177 nvme0n3: ios=5580/5632, merge=0/0, ticks=47721/47316, in_queue=95037, util=96.52% 00:35:39.177 nvme0n4: ios=3194/3584, merge=0/0, ticks=26179/24457, in_queue=50636, util=96.47% 00:35:39.177 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:39.177 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1558441 00:35:39.177 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:39.177 16:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:39.177 
[global] 00:35:39.177 thread=1 00:35:39.177 invalidate=1 00:35:39.177 rw=read 00:35:39.177 time_based=1 00:35:39.177 runtime=10 00:35:39.177 ioengine=libaio 00:35:39.177 direct=1 00:35:39.177 bs=4096 00:35:39.177 iodepth=1 00:35:39.177 norandommap=1 00:35:39.177 numjobs=1 00:35:39.177 00:35:39.177 [job0] 00:35:39.177 filename=/dev/nvme0n1 00:35:39.177 [job1] 00:35:39.177 filename=/dev/nvme0n2 00:35:39.177 [job2] 00:35:39.177 filename=/dev/nvme0n3 00:35:39.177 [job3] 00:35:39.177 filename=/dev/nvme0n4 00:35:39.177 Could not set queue depth (nvme0n1) 00:35:39.177 Could not set queue depth (nvme0n2) 00:35:39.177 Could not set queue depth (nvme0n3) 00:35:39.177 Could not set queue depth (nvme0n4) 00:35:39.177 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:39.177 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:39.177 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:39.177 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:39.177 fio-3.35 00:35:39.177 Starting 4 threads 00:35:42.479 16:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:42.479 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10194944, buflen=4096 00:35:42.479 fio: pid=1558636, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:42.479 16:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:42.479 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.479 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:42.479 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2633728, buflen=4096 00:35:42.479 fio: pid=1558635, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:42.479 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9768960, buflen=4096 00:35:42.479 fio: pid=1558631, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:42.479 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.479 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:42.741 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.741 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:42.741 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9572352, buflen=4096 00:35:42.741 fio: pid=1558632, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:35:42.741 00:35:42.741 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1558631: Wed Nov 20 16:30:18 2024 00:35:42.741 read: IOPS=799, BW=3198KiB/s (3275kB/s)(9540KiB/2983msec) 00:35:42.741 slat (usec): min=7, max=26706, avg=54.92, stdev=793.13 00:35:42.741 clat (usec): min=595, max=42001, avg=1179.83, stdev=1855.49 00:35:42.741 lat (usec): min=621, max=42027, avg=1234.76, stdev=2016.97 00:35:42.741 clat percentiles (usec): 00:35:42.741 | 1.00th=[ 783], 5.00th=[ 898], 10.00th=[ 947], 20.00th=[ 1004], 00:35:42.741 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:35:42.741 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:35:42.741 | 99.00th=[ 1303], 99.50th=[ 1352], 99.90th=[41157], 99.95th=[41157], 00:35:42.741 | 99.99th=[42206] 00:35:42.741 bw ( KiB/s): min= 3456, max= 3648, per=35.68%, avg=3542.40, stdev=77.85, samples=5 00:35:42.741 iops : min= 864, max= 912, avg=885.60, stdev=19.46, samples=5 00:35:42.741 lat (usec) : 750=0.34%, 1000=18.78% 00:35:42.741 lat (msec) : 2=80.60%, 20=0.04%, 50=0.21% 00:35:42.741 cpu : usr=0.94%, sys=2.35%, ctx=2392, majf=0, minf=1 00:35:42.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.741 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1558632: Wed Nov 20 16:30:18 2024 00:35:42.741 read: IOPS=738, BW=2954KiB/s (3024kB/s)(9348KiB/3165msec) 00:35:42.741 slat (usec): min=6, max=13045, avg=33.61, stdev=303.57 00:35:42.741 clat (usec): min=310, max=41075, avg=1304.97, stdev=3293.81 00:35:42.741 lat (usec): min=336, max=41101, avg=1338.58, stdev=3308.62 00:35:42.741 clat percentiles (usec): 00:35:42.741 | 1.00th=[ 693], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 938], 00:35:42.741 | 30.00th=[ 979], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1074], 00:35:42.741 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1221], 00:35:42.741 | 99.00th=[ 1418], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:42.741 | 99.99th=[41157] 00:35:42.741 bw ( KiB/s): min= 1077, max= 3984, per=31.06%, avg=3083.50, stdev=1188.52, samples=6 00:35:42.741 iops : min= 269, max= 996, avg=770.83, stdev=297.21, samples=6 00:35:42.741 lat (usec) : 500=0.13%, 750=1.88%, 1000=33.28% 00:35:42.741 lat (msec) : 2=63.99%, 50=0.68% 00:35:42.741 cpu : usr=0.70%, sys=2.24%, ctx=2342, majf=0, minf=2 00:35:42.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.741 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1558635: Wed Nov 20 16:30:18 2024 00:35:42.741 read: IOPS=228, BW=914KiB/s (936kB/s)(2572KiB/2813msec) 00:35:42.741 slat (usec): min=8, max=21364, avg=77.72, stdev=943.31 00:35:42.741 clat (usec): min=531, max=42111, avg=4256.12, 
stdev=10751.76 00:35:42.741 lat (usec): min=559, max=42139, avg=4333.92, stdev=10778.92 00:35:42.741 clat percentiles (usec): 00:35:42.741 | 1.00th=[ 807], 5.00th=[ 938], 10.00th=[ 996], 20.00th=[ 1057], 00:35:42.741 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:35:42.741 | 70.00th=[ 1205], 80.00th=[ 1303], 90.00th=[ 1418], 95.00th=[41157], 00:35:42.741 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:42.741 | 99.99th=[42206] 00:35:42.741 bw ( KiB/s): min= 784, max= 1320, per=9.67%, avg=960.00, stdev=217.40, samples=5 00:35:42.741 iops : min= 196, max= 330, avg=240.00, stdev=54.35, samples=5 00:35:42.741 lat (usec) : 750=0.62%, 1000=9.94% 00:35:42.741 lat (msec) : 2=81.52%, 50=7.76% 00:35:42.741 cpu : usr=0.39%, sys=0.92%, ctx=646, majf=0, minf=2 00:35:42.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.741 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1558636: Wed Nov 20 16:30:18 2024 00:35:42.741 read: IOPS=954, BW=3816KiB/s (3908kB/s)(9956KiB/2609msec) 00:35:42.741 slat (nsec): min=6603, max=63539, avg=26926.59, stdev=3247.43 00:35:42.741 clat (usec): min=398, max=1346, avg=1004.34, stdev=113.65 00:35:42.741 lat (usec): min=425, max=1373, avg=1031.26, stdev=113.99 00:35:42.741 clat percentiles (usec): 00:35:42.741 | 1.00th=[ 685], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 922], 00:35:42.741 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:35:42.741 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:35:42.741 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1319], 00:35:42.741 | 99.99th=[ 1352] 00:35:42.741 bw ( KiB/s): min= 3752, max= 4064, per=38.91%, avg=3862.40, stdev=136.87, samples=5 00:35:42.741 iops : min= 938, max= 1016, avg=965.60, stdev=34.22, samples=5 00:35:42.741 lat (usec) : 500=0.20%, 750=1.81%, 1000=41.93% 00:35:42.741 lat (msec) : 2=56.02% 00:35:42.741 cpu : usr=1.92%, sys=3.57%, ctx=2491, majf=0, minf=2 00:35:42.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.741 issued rwts: total=2490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.741 00:35:42.741 Run status group 0 (all jobs): 00:35:42.741 READ: bw=9926KiB/s (10.2MB/s), 914KiB/s-3816KiB/s (936kB/s-3908kB/s), io=30.7MiB (32.2MB), run=2609-3165msec 00:35:42.741 00:35:42.741 Disk stats (read/write): 00:35:42.741 nvme0n1: ios=2376/0, merge=0/0, ticks=2629/0, in_queue=2629, util=92.52% 00:35:42.741 nvme0n2: ios=2335/0, merge=0/0, ticks=2912/0, in_queue=2912, util=95.13% 00:35:42.741 nvme0n3: ios=611/0, merge=0/0, ticks=2480/0, in_queue=2480, util=96.03% 00:35:42.741 nvme0n4: ios=2489/0, merge=0/0, ticks=2307/0, in_queue=2307, util=96.46% 00:35:42.741 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:42.742 16:30:18 
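The per-job numbers are internally consistent: with a fixed 4 KiB block size, bandwidth is just IOPS times block size, and at iodepth=1 IOPS is roughly the reciprocal of the mean total latency.

    BW = IOPS x bs:   job0: 799 x 4096 B ~= 3273 kB/s   (fio reports 3198KiB/s = 3275kB/s)
                      job3: 954 x 4096 B ~= 3908 kB/s   (fio reports 3816KiB/s = 3908kB/s)
    IOPS ~= 1/lat:    job0: 1 / 1234.76 usec ~= 810, close to the measured 799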
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:43.002 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.002 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:43.262 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.262 16:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:43.262 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.262 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1558441 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:43.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:43.522 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:43.781 nvmf hotplug test: fio failed as expected 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:43.781 
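The err=95 (Operation not supported) failures above are the point of this phase rather than a regression: while the 10-second read jobs run, fio.sh deletes the RAID bdevs and then every malloc bdev backing the exported namespaces, so in-flight reads start failing and fio exits nonzero (fio_status=4), which the script acknowledges with 'nvmf hotplug test: fio failed as expected'. Stripped of xtrace noise, the hotplug sequence traced above is roughly:

    # sketch of the deletion sequence traced above (names from the xtrace)
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"    # Malloc0 .. Malloc6 here
    done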
16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:43.781 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:43.782 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:43.782 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:43.782 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:43.782 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:43.782 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:43.782 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:43.782 rmmod nvme_tcp 00:35:43.782 rmmod nvme_fabrics 00:35:43.782 rmmod nvme_keyring 00:35:43.782 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1554721 ']' 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1554721 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1554721 ']' 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1554721 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554721 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554721' 00:35:44.041 killing process with pid 1554721 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1554721 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1554721 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:44.041 16:30:19 
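nvmftestfini then unwinds the fixture in the reverse order it was built: flush I/O, unload the kernel initiator modules, kill the nvmf_tgt process recorded at startup, and (just below) scrub the SPDK iptables rules and test-interface addresses. Roughly:

    # sketch of the teardown traced above and continued below
    sync
    modprobe -v -r nvme-tcp        # nvme_fabrics and nvme_keyring go with it, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 1554721                   # $nvmfpid, saved when nvmf_tgt was launched
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1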
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:44.041 16:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.583 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:46.583 00:35:46.583 real 0m28.392s 00:35:46.583 user 2m22.834s 00:35:46.583 sys 0m12.050s 00:35:46.583 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:46.583 16:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:46.583 ************************************ 00:35:46.583 END TEST nvmf_fio_target 00:35:46.583 ************************************ 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:46.583 ************************************ 00:35:46.583 START TEST nvmf_bdevio 00:35:46.583 ************************************ 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:46.583 * Looking for test storage... 
00:35:46.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.583 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:46.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.583 --rc genhtml_branch_coverage=1 00:35:46.583 --rc genhtml_function_coverage=1 00:35:46.583 --rc genhtml_legend=1 00:35:46.583 --rc geninfo_all_blocks=1 00:35:46.583 --rc geninfo_unexecuted_blocks=1 00:35:46.584 00:35:46.584 ' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.584 --rc genhtml_branch_coverage=1 00:35:46.584 --rc genhtml_function_coverage=1 00:35:46.584 --rc genhtml_legend=1 00:35:46.584 --rc geninfo_all_blocks=1 00:35:46.584 --rc geninfo_unexecuted_blocks=1 00:35:46.584 00:35:46.584 ' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.584 --rc genhtml_branch_coverage=1 00:35:46.584 --rc genhtml_function_coverage=1 00:35:46.584 --rc genhtml_legend=1 00:35:46.584 --rc geninfo_all_blocks=1 00:35:46.584 --rc geninfo_unexecuted_blocks=1 00:35:46.584 00:35:46.584 ' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.584 --rc genhtml_branch_coverage=1 00:35:46.584 --rc genhtml_function_coverage=1 00:35:46.584 --rc genhtml_legend=1 00:35:46.584 --rc geninfo_all_blocks=1 00:35:46.584 --rc geninfo_unexecuted_blocks=1 00:35:46.584 00:35:46.584 ' 00:35:46.584 16:30:22 
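The cmp_versions trace above is scripts/common.sh checking whether the installed lcov (1.15) predates version 2: both strings are split on '.', '-' and ':' into arrays and compared component by component. A stripped-down sketch of the same idea (the real lt helper delegates to cmp_versions):

    # minimal per-component version compare, as traced above
    lt() {                           # lt 1.15 2 -> true if $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                     # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2, use the old LCOV_OPTS"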
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.584 16:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:46.584 16:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:54.726 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:54.726 16:30:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:54.726 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:54.726 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:54.726 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:54.726 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:54.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:54.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:35:54.727 00:35:54.727 --- 10.0.0.2 ping statistics --- 00:35:54.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.727 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:54.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:54.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:35:54.727 00:35:54.727 --- 10.0.0.1 ping statistics --- 00:35:54.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.727 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:54.727 16:30:29 
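With the two E810 ports evidently cabled back-to-back (NET_TYPE=phy), common.sh builds a self-contained rig on one host: the target-side port is moved into its own network namespace so the SPDK target and the kernel initiator talk over a real link instead of loopback. The wiring traced above reduces to:

    # netns wiring as traced above (names and addresses from the log)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings completing in under a millisecond confirms the path before any NVMe/TCP traffic is attempted.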
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1563655 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1563655 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1563655 ']' 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:54.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.727 16:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.727 [2024-11-20 16:30:29.838332] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:54.727 [2024-11-20 16:30:29.839461] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:35:54.727 [2024-11-20 16:30:29.839509] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:54.727 [2024-11-20 16:30:29.939771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:54.727 [2024-11-20 16:30:29.992709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:54.727 [2024-11-20 16:30:29.992765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:54.727 [2024-11-20 16:30:29.992773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:54.727 [2024-11-20 16:30:29.992780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:54.727 [2024-11-20 16:30:29.992786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:54.727 [2024-11-20 16:30:29.994858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:54.727 [2024-11-20 16:30:29.995017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:54.727 [2024-11-20 16:30:29.995198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:54.727 [2024-11-20 16:30:29.995255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:54.727 [2024-11-20 16:30:30.082307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
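The reactor placement above follows directly from the core masks: nvmf_tgt gets -m 0x78 and the bdevio app launched further down gets -c 0x7, so the target and the test tool never share a core.

    0x78 = 0b0111_1000 -> cores {3,4,5,6}   (nvmf_tgt: 'Total cores available: 4')
    0x07 = 0b0000_0111 -> cores {0,1,2}     (bdevio:   'Total cores available: 3')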
00:35:54.727 [2024-11-20 16:30:30.082780] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:54.727 [2024-11-20 16:30:30.083319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:54.727 [2024-11-20 16:30:30.083828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:54.727 [2024-11-20 16:30:30.083890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:54.727 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.727 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:54.727 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:54.727 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:54.727 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.989 [2024-11-20 16:30:30.704278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.989 Malloc0 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.989 16:30:30 
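bdevio.sh lines 18-22 are the canonical NVMe-oF/TCP target bring-up over JSON-RPC: create the transport, create a backing bdev (64 MiB of 512-byte blocks, per MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE above), create a subsystem, attach the bdev as a namespace, then (just below) expose it on a listener. Written out as plain rpc.py calls:

    # the rpc_cmd sequence traced above and just below
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001          # -a: allow any host NQN to connect
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420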
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:54.989 [2024-11-20 16:30:30.800352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:54.989 { 00:35:54.989 "params": { 00:35:54.989 "name": "Nvme$subsystem", 00:35:54.989 "trtype": "$TEST_TRANSPORT", 00:35:54.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.989 "adrfam": "ipv4", 00:35:54.989 "trsvcid": "$NVMF_PORT", 00:35:54.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.989 "hdgst": ${hdgst:-false}, 00:35:54.989 "ddgst": ${ddgst:-false} 00:35:54.989 }, 00:35:54.989 "method": "bdev_nvme_attach_controller" 00:35:54.989 } 00:35:54.989 EOF 00:35:54.989 )") 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:54.989 16:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:54.989 "params": { 00:35:54.989 "name": "Nvme1", 00:35:54.989 "trtype": "tcp", 00:35:54.989 "traddr": "10.0.0.2", 00:35:54.989 "adrfam": "ipv4", 00:35:54.989 "trsvcid": "4420", 00:35:54.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:54.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:54.990 "hdgst": false, 00:35:54.990 "ddgst": false 00:35:54.990 }, 00:35:54.990 "method": "bdev_nvme_attach_controller" 00:35:54.990 }' 00:35:54.990 [2024-11-20 16:30:30.859104] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
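gen_nvmf_target_json fills the heredoc template above and feeds bdevio, via --json /dev/fd/62, a config whose single entry attaches the subsystem just created as controller Nvme1; that is why the suite below runs against bdev Nvme1n1. Cleaned of log timestamps, the generated fragment printed above is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }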
00:35:54.990 [2024-11-20 16:30:30.859182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564006 ] 00:35:55.251 [2024-11-20 16:30:30.954168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:55.251 [2024-11-20 16:30:31.010593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.251 [2024-11-20 16:30:31.010762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.251 [2024-11-20 16:30:31.010762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:55.251 I/O targets: 00:35:55.251 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:55.251 00:35:55.251 00:35:55.251 CUnit - A unit testing framework for C - Version 2.1-3 00:35:55.251 http://cunit.sourceforge.net/ 00:35:55.251 00:35:55.251 00:35:55.251 Suite: bdevio tests on: Nvme1n1 00:35:55.511 Test: blockdev write read block ...passed 00:35:55.511 Test: blockdev write zeroes read block ...passed 00:35:55.511 Test: blockdev write zeroes read no split ...passed 00:35:55.511 Test: blockdev write zeroes read split ...passed 00:35:55.511 Test: blockdev write zeroes read split partial ...passed 00:35:55.511 Test: blockdev reset ...[2024-11-20 16:30:31.332575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:55.511 [2024-11-20 16:30:31.332687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x735970 (9): Bad file descriptor 00:35:55.512 [2024-11-20 16:30:31.386728] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:55.512 passed 00:35:55.512 Test: blockdev write read 8 blocks ...passed 00:35:55.512 Test: blockdev write read size > 128k ...passed 00:35:55.512 Test: blockdev write read invalid size ...passed 00:35:55.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:55.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:55.512 Test: blockdev write read max offset ...passed 00:35:55.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:55.773 Test: blockdev writev readv 8 blocks ...passed 00:35:55.773 Test: blockdev writev readv 30 x 1block ...passed 00:35:55.773 Test: blockdev writev readv block ...passed 00:35:55.773 Test: blockdev writev readv size > 128k ...passed 00:35:55.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:55.773 Test: blockdev comparev and writev ...[2024-11-20 16:30:31.571972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.572030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.572047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.572056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.572674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.572690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.572704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.572712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.573312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.573325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.573339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.573348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.573979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.573993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.574007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:55.773 [2024-11-20 16:30:31.574015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:55.773 passed 00:35:55.773 Test: blockdev nvme passthru rw ...passed 00:35:55.773 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:30:31.660046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:55.773 [2024-11-20 16:30:31.660064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.660472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:55.773 [2024-11-20 16:30:31.660487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.660845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:55.773 [2024-11-20 16:30:31.660859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:55.773 [2024-11-20 16:30:31.661235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:55.773 [2024-11-20 16:30:31.661248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:55.773 passed 00:35:55.773 Test: blockdev nvme admin passthru ...passed 00:35:56.035 Test: blockdev copy ...passed 00:35:56.035 00:35:56.035 Run Summary: Type Total Ran Passed Failed Inactive 00:35:56.035 suites 1 1 n/a 0 0 00:35:56.035 tests 23 23 23 0 0 00:35:56.035 asserts 152 152 152 0 n/a 00:35:56.035 00:35:56.035 Elapsed time = 1.097 seconds 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:56.035 rmmod nvme_tcp 00:35:56.035 rmmod nvme_fabrics 00:35:56.035 rmmod nvme_keyring 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
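The rmmod/modprobe lines above are nvmftestfini unloading the host-side NVMe modules before the target process is stopped; killprocess then signals the recorded nvmf_tgt pid and reaps it. A compressed sketch of that teardown, assuming the hypothetical wrapper name nvmf_teardown (the real logic is spread across nvmftestfini and killprocess in nvmf/common.sh):

nvmf_teardown() {
  local pid=$1
  sync
  modprobe -v -r nvme-tcp       # rmmod output for nvme_tcp/nvme_fabrics/
  modprobe -v -r nvme-fabrics   # nvme_keyring appears in the trace above
  if kill -0 "$pid" 2>/dev/null; then   # target still running?
    kill "$pid"                          # SIGTERM first
    wait "$pid" 2>/dev/null || true      # reap; ignore exit status
  fi
}
nvmf_teardown 1563655   # pid logged for this run's nvmf_tgt
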
00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1563655 ']' 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1563655 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1563655 ']' 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1563655 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.035 16:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563655 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1563655' 00:35:56.296 killing process with pid 1563655 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1563655 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1563655 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.296 16:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.844 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:58.844 00:35:58.844 real 0m12.214s 00:35:58.844 user 
0m9.277s 00:35:58.844 sys 0m6.452s 00:35:58.844 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.844 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:58.844 ************************************ 00:35:58.844 END TEST nvmf_bdevio 00:35:58.844 ************************************ 00:35:58.844 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:58.844 00:35:58.844 real 5m1.179s 00:35:58.844 user 10m25.320s 00:35:58.844 sys 2m6.317s 00:35:58.844 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.844 16:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:58.844 ************************************ 00:35:58.844 END TEST nvmf_target_core_interrupt_mode 00:35:58.844 ************************************ 00:35:58.844 16:30:34 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:58.844 16:30:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:58.844 16:30:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:58.844 16:30:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:58.844 ************************************ 00:35:58.844 START TEST nvmf_interrupt 00:35:58.844 ************************************ 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:58.844 * Looking for test storage... 
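The scripts/common.sh trace below is a field-by-field dotted-version compare (here deciding whether the installed lcov predates 2.x before choosing coverage flags). A compact standalone sketch of the same idea, assuming the hypothetical name version_lt and numeric-only components; the in-tree cmp_versions handles more cases:

version_lt() {
  # Return success when dotted version $1 sorts before $2, e.g. 1.15 < 2.
  local IFS=. i
  local -a a=($1) b=($2)
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x"
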
00:35:58.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:58.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.844 --rc genhtml_branch_coverage=1 00:35:58.844 --rc genhtml_function_coverage=1 00:35:58.844 --rc genhtml_legend=1 00:35:58.844 --rc geninfo_all_blocks=1 00:35:58.844 --rc geninfo_unexecuted_blocks=1 00:35:58.844 00:35:58.844 ' 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:58.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.844 --rc genhtml_branch_coverage=1 00:35:58.844 --rc genhtml_function_coverage=1 00:35:58.844 --rc genhtml_legend=1 00:35:58.844 --rc geninfo_all_blocks=1 00:35:58.844 --rc geninfo_unexecuted_blocks=1 00:35:58.844 00:35:58.844 ' 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:58.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.844 --rc genhtml_branch_coverage=1 00:35:58.844 --rc genhtml_function_coverage=1 00:35:58.844 --rc genhtml_legend=1 00:35:58.844 --rc geninfo_all_blocks=1 00:35:58.844 --rc geninfo_unexecuted_blocks=1 00:35:58.844 00:35:58.844 ' 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:58.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.844 --rc genhtml_branch_coverage=1 00:35:58.844 --rc genhtml_function_coverage=1 00:35:58.844 --rc genhtml_legend=1 00:35:58.844 --rc geninfo_all_blocks=1 00:35:58.844 --rc geninfo_unexecuted_blocks=1 00:35:58.844 00:35:58.844 ' 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:58.844 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:58.845 16:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:06.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.990 16:30:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:06.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.990 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:06.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:06.991 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:06.991 16:30:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:06.991 16:30:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:06.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:36:06.991 00:36:06.991 --- 10.0.0.2 ping statistics --- 00:36:06.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.991 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:06.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:36:06.991 00:36:06.991 --- 10.0.0.1 ping statistics --- 00:36:06.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.991 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1568348 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1568348 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1568348 ']' 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.991 16:30:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:06.991 [2024-11-20 16:30:42.241001] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:06.991 [2024-11-20 16:30:42.242282] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:36:06.991 [2024-11-20 16:30:42.242336] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.991 [2024-11-20 16:30:42.341098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:06.991 [2024-11-20 16:30:42.391846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:06.991 [2024-11-20 16:30:42.391896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.991 [2024-11-20 16:30:42.391905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.991 [2024-11-20 16:30:42.391912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.991 [2024-11-20 16:30:42.391919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:06.991 [2024-11-20 16:30:42.393709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.991 [2024-11-20 16:30:42.393713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.991 [2024-11-20 16:30:42.472703] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:06.991 [2024-11-20 16:30:42.473335] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:06.991 [2024-11-20 16:30:42.473632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:07.253 5000+0 records in 00:36:07.253 5000+0 records out 00:36:07.253 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0187527 s, 546 MB/s 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:07.253 AIO0 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:07.253 [2024-11-20 16:30:43.166740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.253 16:30:43 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.253 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:07.517 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.517 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:07.517 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.517 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:07.517 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.517 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:07.517 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:07.518 [2024-11-20 16:30:43.211236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1568348 0 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1568348 0 idle 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568348 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0' 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568348 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1568348 1 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1568348 1 idle 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:07.518 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:07.519 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568352 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568352 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1568704 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1568348 0 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1568348 0 busy 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:07.784 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568348 root 20 0 128.2g 43776 32256 R 53.3 0.0 0:00.41 reactor_0' 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568348 root 20 0 128.2g 43776 32256 R 53.3 0.0 0:00.41 reactor_0 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=53.3 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=53 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1568348 1 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1568348 1 busy 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568352 root 20 0 128.2g 43776 32256 R 93.3 0.0 0:00.22 reactor_1' 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568352 root 20 0 128.2g 43776 32256 R 93.3 0.0 0:00.22 reactor_1 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:08.045 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:08.306 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:36:08.306 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:36:08.306 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:08.306 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:08.306 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:08.306 16:30:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:08.306 16:30:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1568704 00:36:18.303 Initializing NVMe Controllers 00:36:18.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:18.303 Controller IO queue size 256, less than required. 00:36:18.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:18.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:18.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:18.303 Initialization complete. Launching workers. 
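The busy/idle assertions bracketing this perf run (whose latency table follows) all reduce to one probe: take a single threaded top snapshot of the target pid, pick out the reactor's row, and read its %CPU column — the sed/awk '{print $9}' pipeline visible in the trace. A standalone sketch of that probe, assuming the hypothetical helper name reactor_cpu; the pid and the 30% busy threshold are taken from this run:

reactor_cpu() {
  local pid=$1 idx=$2
  # One batch-mode snapshot, thread view, wide output; %CPU is field 9.
  top -bHn 1 -p "$pid" -w 256 \
    | grep "reactor_${idx}" \
    | sed -e 's/^\s*//g' \
    | awk '{print $9}'
}
rate=$(reactor_cpu 1568348 0)
(( ${rate%.*} > 30 )) && echo busy || echo idle   # strip decimals, compare
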
00:36:18.303 ======================================================== 00:36:18.303 Latency(us) 00:36:18.303 Device Information : IOPS MiB/s Average min max 00:36:18.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18893.50 73.80 13554.05 3904.73 33113.15 00:36:18.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19707.60 76.98 12991.48 8114.11 28903.89 00:36:18.303 ======================================================== 00:36:18.303 Total : 38601.09 150.79 13266.83 3904.73 33113.15 00:36:18.303 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1568348 0 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1568348 0 idle 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568348 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.32 reactor_0' 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568348 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.32 reactor_0 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1568348 1 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1568348 1 idle 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:18.303 16:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568352 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.01 reactor_1' 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568352 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.01 reactor_1 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:18.303 16:30:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:19.244 16:30:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:19.244 16:30:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:19.244 16:30:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:19.244 16:30:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:19.244 16:30:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1568348 0 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1568348 0 idle 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:21.156 16:30:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568348 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.69 reactor_0' 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568348 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.69 reactor_0 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1568348 1 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1568348 1 idle 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1568348 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
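
A few lines back, target/interrupt.sh attached the kernel initiator with `nvme connect` and then blocked in waitforserial until the namespace surfaced as a block device: sleep briefly, then poll `lsblk -l -o NAME,SERIAL`, counting rows that match the serial, until the count reaches the expected device count or about 15 tries elapse. A condensed sketch of that polling loop (illustrative, not the verbatim autotest_common.sh function):

    # Wait until <want> block devices with serial <serial> are visible.
    waitforserial() {
        local serial=$1 want=${2:-1} i=0 found
        sleep 2                                       # let udev settle first
        while (( i++ <= 15 )); do
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == want )) && return 0
            sleep 2
        done
        return 1
    }
    # waitforserial SPDKISFASTANDAWESOME

With the initiator connected but no I/O in flight, the idle checks that resume below are the heart of the interrupt-mode test: both reactors should sit at or near 0% CPU (at most the 30% idle threshold) instead of spinning.
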
00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1568348 -w 256 00:36:21.156 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1568352 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.16 reactor_1' 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1568352 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.16 reactor_1 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:21.417 16:30:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:21.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:21.678 rmmod nvme_tcp 00:36:21.678 rmmod nvme_fabrics 00:36:21.678 rmmod nvme_keyring 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1568348 ']' 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1568348 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1568348 ']' 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1568348 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1568348 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:21.678 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1568348' 00:36:21.678 killing process with pid 1568348 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1568348 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1568348 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:21.939 16:30:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.483 16:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.483 00:36:24.483 real 0m25.450s 00:36:24.483 user 0m40.307s 00:36:24.483 sys 0m9.821s 00:36:24.483 16:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.483 16:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.483 ************************************ 00:36:24.483 END TEST nvmf_interrupt 00:36:24.483 ************************************ 00:36:24.483 00:36:24.483 real 30m9.509s 00:36:24.483 user 61m45.825s 00:36:24.483 sys 10m21.543s 00:36:24.483 16:30:59 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.483 16:30:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.483 ************************************ 00:36:24.483 END TEST nvmf_tcp 00:36:24.483 ************************************ 00:36:24.483 16:30:59 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:24.483 16:30:59 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:24.484 16:30:59 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:24.484 16:30:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.484 16:30:59 -- common/autotest_common.sh@10 -- # set +x 00:36:24.484 ************************************ 00:36:24.484 START TEST spdkcli_nvmf_tcp 00:36:24.484 ************************************ 00:36:24.484 16:30:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:24.484 * Looking for test storage... 00:36:24.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.484 --rc genhtml_branch_coverage=1 00:36:24.484 --rc genhtml_function_coverage=1 00:36:24.484 --rc genhtml_legend=1 00:36:24.484 --rc geninfo_all_blocks=1 00:36:24.484 --rc geninfo_unexecuted_blocks=1 00:36:24.484 00:36:24.484 ' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.484 --rc genhtml_branch_coverage=1 00:36:24.484 --rc genhtml_function_coverage=1 00:36:24.484 --rc genhtml_legend=1 00:36:24.484 --rc geninfo_all_blocks=1 00:36:24.484 --rc geninfo_unexecuted_blocks=1 00:36:24.484 00:36:24.484 ' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.484 --rc genhtml_branch_coverage=1 00:36:24.484 --rc genhtml_function_coverage=1 00:36:24.484 --rc genhtml_legend=1 00:36:24.484 --rc geninfo_all_blocks=1 00:36:24.484 --rc geninfo_unexecuted_blocks=1 00:36:24.484 00:36:24.484 ' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.484 --rc genhtml_branch_coverage=1 00:36:24.484 --rc genhtml_function_coverage=1 00:36:24.484 --rc genhtml_legend=1 00:36:24.484 --rc geninfo_all_blocks=1 00:36:24.484 --rc geninfo_unexecuted_blocks=1 00:36:24.484 00:36:24.484 ' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:24.484 
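
The scripts/common.sh activity above is the lcov version gate: `lt 1.15 2` splits both version strings on '.', '-' and ':' (the `IFS=.-:` / `read -ra ver1` steps) and walks the components numerically, which is what the `decimal 1` / `decimal 2` calls and the `(( ver1[v] > ver2[v] ))` comparisons are doing. A condensed sketch of that component-wise comparison, assuming purely numeric fields (the real cmp_versions also validates each field through decimal()):

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local op=$2 v
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing components count as 0, so "1.15" vs "2" is (1,15) vs (2,0).
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *">"* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *"<"* ]]; return; }
        done
        [[ $op == *"="* ]]    # all components equal: only <=, >=, == succeed
    }

    lt 1.15 2 && echo "lcov older than 2.x: enable branch/function coverage flags"

Here the installed lcov reports 1.15, 1.15 < 2 holds, and so the test exports the extra `--rc lcov_*_coverage=1` options seen in the surrounding lines.
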
16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:24.484 16:31:00 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1571893 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1571893 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1571893 ']' 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.484 16:31:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:24.485 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.485 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.485 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.485 16:31:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.485 [2024-11-20 16:31:00.272007] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
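
One genuine wart surfaces twice in this log, here and again when nvmf_identify_passthru sources the same file: at nvmf/common.sh line 33, `'[' '' -eq 1 ']'` trips `[: : integer expression expected`, because test(1) rejects `-eq` on an empty string. The run survives only because the failing test falls through to the false branch, as if the flag were 0. A sketch of the conventional guard; `flag` below is illustrative and stands in for whichever variable line 33 actually tests:

    flag=""                            # empty/unset, as in this run
    [ "$flag" -eq 1 ] 2>/dev/null      # -> "[: : integer expression expected"
    if [ "${flag:-0}" -eq 1 ]; then    # default empty to 0: same branch, no noise
        echo "flag enabled"
    fi

Everything after this point, starting with the nvmf_tgt launch above, proceeds normally; the message is noise, not a failure.
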
00:36:24.485 [2024-11-20 16:31:00.272079] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571893 ] 00:36:24.485 [2024-11-20 16:31:00.363087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:24.744 [2024-11-20 16:31:00.416754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.745 [2024-11-20 16:31:00.416759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.316 16:31:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:25.316 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:25.316 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:25.316 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:25.316 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:25.316 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:25.316 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:25.316 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:25.316 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:25.316 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:25.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:25.316 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:25.316 ' 00:36:28.612 [2024-11-20 16:31:03.819257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:29.553 [2024-11-20 16:31:05.179422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:32.102 [2024-11-20 16:31:07.710445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:34.122 [2024-11-20 16:31:09.932710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:36.049 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:36.049 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:36.049 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:36.049 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:36.049 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:36.049 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:36.049 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:36.049 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:36.049 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:36.049 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:36.049 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:36.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:36.049 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:36.049 16:31:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:36.311 
16:31:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:36.311 16:31:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:36.311 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:36.311 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:36.311 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:36.311 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:36.311 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:36.311 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:36.311 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:36.311 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:36.311 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:36.311 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:36.311 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:36.311 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:36.311 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:36.311 ' 00:36:42.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:42.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:42.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:42.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:42.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:42.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:42.896 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:42.896 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:42.896 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:42.896 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:42.896 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:42.896 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:42.896 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:42.896 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:42.896 
16:31:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1571893 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1571893 ']' 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1571893 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:42.896 16:31:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1571893 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1571893' 00:36:42.896 killing process with pid 1571893 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1571893 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1571893 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1571893 ']' 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1571893 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1571893 ']' 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1571893 00:36:42.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1571893) - No such process 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1571893 is not found' 00:36:42.896 Process with pid 1571893 is not found 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:42.896 16:31:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:42.897 16:31:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:42.897 00:36:42.897 real 0m18.128s 00:36:42.897 user 0m40.281s 00:36:42.897 sys 0m0.887s 00:36:42.897 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:42.897 16:31:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:42.897 ************************************ 00:36:42.897 END TEST spdkcli_nvmf_tcp 00:36:42.897 ************************************ 00:36:42.897 16:31:18 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:42.897 16:31:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:42.897 16:31:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:42.897 16:31:18 -- common/autotest_common.sh@10 -- # set +x 00:36:42.897 ************************************ 00:36:42.897 START TEST nvmf_identify_passthru 00:36:42.897 ************************************ 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:42.897 * Looking for test 
storage... 00:36:42.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:42.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.897 --rc genhtml_branch_coverage=1 00:36:42.897 --rc genhtml_function_coverage=1 00:36:42.897 --rc genhtml_legend=1 00:36:42.897 --rc geninfo_all_blocks=1 00:36:42.897 --rc geninfo_unexecuted_blocks=1 00:36:42.897 00:36:42.897 ' 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:42.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.897 --rc genhtml_branch_coverage=1 00:36:42.897 --rc genhtml_function_coverage=1 00:36:42.897 --rc genhtml_legend=1 00:36:42.897 --rc geninfo_all_blocks=1 00:36:42.897 --rc geninfo_unexecuted_blocks=1 00:36:42.897 00:36:42.897 ' 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:42.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.897 --rc genhtml_branch_coverage=1 00:36:42.897 --rc genhtml_function_coverage=1 00:36:42.897 --rc genhtml_legend=1 00:36:42.897 --rc geninfo_all_blocks=1 00:36:42.897 --rc geninfo_unexecuted_blocks=1 00:36:42.897 00:36:42.897 ' 00:36:42.897 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:42.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.897 --rc genhtml_branch_coverage=1 00:36:42.897 --rc genhtml_function_coverage=1 00:36:42.897 --rc genhtml_legend=1 00:36:42.897 --rc geninfo_all_blocks=1 00:36:42.897 --rc geninfo_unexecuted_blocks=1 00:36:42.897 00:36:42.897 ' 00:36:42.897 16:31:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.897 16:31:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.897 16:31:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.897 16:31:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.897 16:31:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:42.897 16:31:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:42.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:42.897 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:42.897 16:31:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.897 16:31:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.898 16:31:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.898 16:31:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.898 16:31:18 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.898 16:31:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:42.898 16:31:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.898 16:31:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.898 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:42.898 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:42.898 16:31:18 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:42.898 16:31:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:51.058 16:31:25 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:51.058 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:51.058 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:51.058 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:51.058 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:51.058 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:51.059 16:31:25 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:51.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:51.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:36:51.059 00:36:51.059 --- 10.0.0.2 ping statistics --- 00:36:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.059 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:51.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:51.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:36:51.059 00:36:51.059 --- 10.0.0.1 ping statistics --- 00:36:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.059 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:51.059 16:31:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:51.059 16:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.059 16:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:51.059 16:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:51.059 16:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:51.059 16:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:51.059 16:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:51.059 16:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:51.059 16:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:51.059 16:31:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:51.059 16:31:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:51.059 16:31:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:51.059 16:31:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:51.059 16:31:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:51.059 16:31:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:51.059 16:31:26 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.059 16:31:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.321 16:31:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:51.321 16:31:26 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.321 16:31:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.321 16:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1579299 00:36:51.321 16:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:51.321 16:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:51.321 16:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1579299 00:36:51.321 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1579299 ']' 00:36:51.321 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.321 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.321 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:51.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.321 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.321 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.321 [2024-11-20 16:31:27.062290] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:36:51.321 [2024-11-20 16:31:27.062362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.321 [2024-11-20 16:31:27.164726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.321 [2024-11-20 16:31:27.210830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.321 [2024-11-20 16:31:27.210866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:51.321 [2024-11-20 16:31:27.210873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.321 [2024-11-20 16:31:27.210880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.321 [2024-11-20 16:31:27.210888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:51.321 [2024-11-20 16:31:27.212349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.321 [2024-11-20 16:31:27.212472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.321 [2024-11-20 16:31:27.212620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.321 [2024-11-20 16:31:27.212620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:52.264 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.264 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:52.264 16:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:52.264 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.264 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.264 INFO: Log level set to 20 00:36:52.264 INFO: Requests: 00:36:52.264 { 00:36:52.264 "jsonrpc": "2.0", 00:36:52.264 "method": "nvmf_set_config", 00:36:52.264 "id": 1, 00:36:52.264 "params": { 00:36:52.264 "admin_cmd_passthru": { 00:36:52.264 "identify_ctrlr": true 00:36:52.264 } 00:36:52.264 } 00:36:52.264 } 00:36:52.264 00:36:52.264 INFO: response: 00:36:52.264 { 00:36:52.264 "jsonrpc": "2.0", 00:36:52.264 "id": 1, 00:36:52.264 "result": true 00:36:52.264 } 00:36:52.264 00:36:52.264 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.264 16:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:52.264 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.264 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.264 INFO: Setting log level to 20 00:36:52.264 INFO: Setting log level to 20 00:36:52.264 INFO: Log level set to 20 00:36:52.264 INFO: Log level set to 20 00:36:52.264 INFO: Requests: 00:36:52.264 { 00:36:52.264 "jsonrpc": "2.0", 00:36:52.264 "method": "framework_start_init", 00:36:52.264 "id": 1 00:36:52.264 } 00:36:52.264 00:36:52.264 INFO: Requests: 00:36:52.264 { 00:36:52.264 "jsonrpc": "2.0", 00:36:52.264 "method": "framework_start_init", 00:36:52.264 "id": 1 00:36:52.264 } 00:36:52.264 00:36:52.264 [2024-11-20 16:31:27.978580] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:52.264 INFO: response: 00:36:52.264 { 00:36:52.264 "jsonrpc": "2.0", 00:36:52.264 "id": 1, 00:36:52.264 "result": true 00:36:52.264 } 00:36:52.265 00:36:52.265 INFO: response: 00:36:52.265 { 00:36:52.265 "jsonrpc": "2.0", 00:36:52.265 "id": 1, 00:36:52.265 "result": true 00:36:52.265 } 00:36:52.265 00:36:52.265 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.265 16:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:52.265 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.265 16:31:27 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:52.265 INFO: Setting log level to 40 00:36:52.265 INFO: Setting log level to 40 00:36:52.265 INFO: Setting log level to 40 00:36:52.265 [2024-11-20 16:31:27.992129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.265 16:31:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.265 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:52.265 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.265 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.265 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:52.265 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.265 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.526 Nvme0n1 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.526 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.526 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.526 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.526 [2024-11-20 16:31:28.397518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.526 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.526 [ 00:36:52.526 { 00:36:52.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:52.526 "subtype": "Discovery", 00:36:52.526 "listen_addresses": [], 00:36:52.526 "allow_any_host": true, 00:36:52.526 "hosts": [] 00:36:52.526 }, 00:36:52.526 { 00:36:52.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:52.526 "subtype": "NVMe", 00:36:52.526 "listen_addresses": [ 00:36:52.526 { 00:36:52.526 "trtype": "TCP", 00:36:52.526 "adrfam": "IPv4", 00:36:52.526 "traddr": "10.0.0.2", 00:36:52.526 "trsvcid": "4420" 00:36:52.526 } 00:36:52.526 ], 00:36:52.526 "allow_any_host": true, 00:36:52.526 "hosts": [], 00:36:52.526 "serial_number": 
"SPDK00000000000001", 00:36:52.526 "model_number": "SPDK bdev Controller", 00:36:52.526 "max_namespaces": 1, 00:36:52.526 "min_cntlid": 1, 00:36:52.526 "max_cntlid": 65519, 00:36:52.526 "namespaces": [ 00:36:52.526 { 00:36:52.526 "nsid": 1, 00:36:52.526 "bdev_name": "Nvme0n1", 00:36:52.526 "name": "Nvme0n1", 00:36:52.526 "nguid": "36344730526054870025384500000044", 00:36:52.526 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:52.526 } 00:36:52.526 ] 00:36:52.526 } 00:36:52.526 ] 00:36:52.526 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.526 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:52.526 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:52.526 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:52.787 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:52.787 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:52.787 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:52.787 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:53.049 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:53.049 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:53.049 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:53.049 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.049 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:53.049 16:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:53.049 rmmod nvme_tcp 00:36:53.049 rmmod nvme_fabrics 00:36:53.049 rmmod nvme_keyring 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1579299 ']' 00:36:53.049 16:31:28 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1579299 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1579299 ']' 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1579299 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1579299 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1579299' 00:36:53.049 killing process with pid 1579299 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1579299 00:36:53.049 16:31:28 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1579299 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:53.310 16:31:29 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.310 16:31:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:53.310 16:31:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.853 16:31:31 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:55.853 00:36:55.853 real 0m13.121s 00:36:55.853 user 0m10.281s 00:36:55.853 sys 0m6.693s 00:36:55.853 16:31:31 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:55.853 16:31:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:55.853 ************************************ 00:36:55.853 END TEST nvmf_identify_passthru 00:36:55.853 ************************************ 00:36:55.853 16:31:31 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:55.854 16:31:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:55.854 16:31:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:55.854 16:31:31 -- common/autotest_common.sh@10 -- # set +x 00:36:55.854 ************************************ 00:36:55.854 START TEST nvmf_dif 00:36:55.854 ************************************ 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:55.854 * Looking for test storage... 
00:36:55.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.854 --rc genhtml_branch_coverage=1 00:36:55.854 --rc genhtml_function_coverage=1 00:36:55.854 --rc genhtml_legend=1 00:36:55.854 --rc geninfo_all_blocks=1 00:36:55.854 --rc geninfo_unexecuted_blocks=1 00:36:55.854 00:36:55.854 ' 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.854 --rc genhtml_branch_coverage=1 00:36:55.854 --rc genhtml_function_coverage=1 00:36:55.854 --rc genhtml_legend=1 00:36:55.854 --rc geninfo_all_blocks=1 00:36:55.854 --rc geninfo_unexecuted_blocks=1 00:36:55.854 00:36:55.854 ' 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.854 --rc genhtml_branch_coverage=1 00:36:55.854 --rc genhtml_function_coverage=1 00:36:55.854 --rc genhtml_legend=1 00:36:55.854 --rc geninfo_all_blocks=1 00:36:55.854 --rc geninfo_unexecuted_blocks=1 00:36:55.854 00:36:55.854 ' 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.854 --rc genhtml_branch_coverage=1 00:36:55.854 --rc genhtml_function_coverage=1 00:36:55.854 --rc genhtml_legend=1 00:36:55.854 --rc geninfo_all_blocks=1 00:36:55.854 --rc geninfo_unexecuted_blocks=1 00:36:55.854 00:36:55.854 ' 00:36:55.854 16:31:31 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.854 16:31:31 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.854 16:31:31 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.854 16:31:31 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.854 16:31:31 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.854 16:31:31 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:55.854 16:31:31 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:55.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:55.854 16:31:31 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:55.854 16:31:31 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:55.854 16:31:31 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:55.854 16:31:31 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:55.854 16:31:31 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:55.854 16:31:31 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:55.854 16:31:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:03.999 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:03.999 
16:31:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:03.999 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:03.999 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:03.999 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:03.999 16:31:38 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:04.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:04.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:37:04.000 00:37:04.000 --- 10.0.0.2 ping statistics --- 00:37:04.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.000 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:04.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:04.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:37:04.000 00:37:04.000 --- 10.0.0.1 ping statistics --- 00:37:04.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.000 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:04.000 16:31:38 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:06.543 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:06.543 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:06.543 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:06.804 16:31:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:06.804 16:31:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1585179 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1585179 00:37:06.804 16:31:42 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1585179 ']' 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:06.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.804 16:31:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.066 [2024-11-20 16:31:42.754044] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:37:07.066 [2024-11-20 16:31:42.754107] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.066 [2024-11-20 16:31:42.853128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.066 [2024-11-20 16:31:42.904934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.066 [2024-11-20 16:31:42.904984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.066 [2024-11-20 16:31:42.904993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.066 [2024-11-20 16:31:42.905000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.066 [2024-11-20 16:31:42.905007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:07.066 [2024-11-20 16:31:42.905813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.636 16:31:43 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.636 16:31:43 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:07.636 16:31:43 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:07.636 16:31:43 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:07.636 16:31:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.896 16:31:43 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.896 16:31:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:07.896 16:31:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:07.896 16:31:43 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.896 16:31:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.896 [2024-11-20 16:31:43.600277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.896 16:31:43 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.896 16:31:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:07.896 16:31:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:07.896 16:31:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.897 16:31:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 ************************************ 00:37:07.897 START TEST fio_dif_1_default 00:37:07.897 ************************************ 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 bdev_null0 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 [2024-11-20 16:31:43.684619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:07.897 { 00:37:07.897 "params": { 00:37:07.897 "name": "Nvme$subsystem", 00:37:07.897 "trtype": "$TEST_TRANSPORT", 00:37:07.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.897 "adrfam": "ipv4", 00:37:07.897 "trsvcid": "$NVMF_PORT", 00:37:07.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.897 "hdgst": ${hdgst:-false}, 00:37:07.897 
"ddgst": ${ddgst:-false} 00:37:07.897 }, 00:37:07.897 "method": "bdev_nvme_attach_controller" 00:37:07.897 } 00:37:07.897 EOF 00:37:07.897 )") 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:07.897 "params": { 00:37:07.897 "name": "Nvme0", 00:37:07.897 "trtype": "tcp", 00:37:07.897 "traddr": "10.0.0.2", 00:37:07.897 "adrfam": "ipv4", 00:37:07.897 "trsvcid": "4420", 00:37:07.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:07.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:07.897 "hdgst": false, 00:37:07.897 "ddgst": false 00:37:07.897 }, 00:37:07.897 "method": "bdev_nvme_attach_controller" 00:37:07.897 }' 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:07.897 16:31:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:08.486 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:08.486 fio-3.35 00:37:08.486 Starting 1 thread 00:37:20.717 00:37:20.717 filename0: (groupid=0, jobs=1): err= 0: pid=1585729: Wed Nov 20 16:31:54 2024 00:37:20.717 read: IOPS=205, BW=821KiB/s (840kB/s)(8224KiB/10022msec) 00:37:20.717 slat (nsec): min=5446, max=72577, avg=6363.03, stdev=2081.19 00:37:20.717 clat (usec): min=372, max=41810, avg=19480.36, stdev=20249.03 00:37:20.717 lat (usec): min=378, max=41848, avg=19486.72, stdev=20248.87 00:37:20.717 clat percentiles (usec): 00:37:20.717 | 1.00th=[ 498], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 652], 00:37:20.717 | 30.00th=[ 668], 40.00th=[ 742], 50.00th=[ 783], 60.00th=[41157], 00:37:20.717 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:20.717 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:37:20.717 | 99.99th=[41681] 00:37:20.717 bw ( KiB/s): min= 704, max= 1472, per=99.93%, avg=820.80, stdev=180.21, samples=20 00:37:20.717 iops : min= 176, max= 368, avg=205.20, stdev=45.05, samples=20 00:37:20.717 lat (usec) : 500=1.31%, 750=40.13%, 1000=12.26% 00:37:20.717 lat (msec) : 50=46.30% 00:37:20.717 cpu : usr=93.43%, sys=6.30%, ctx=49, majf=0, minf=252 00:37:20.717 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.717 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.717 latency : target=0, window=0, percentile=100.00%, depth=4 
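A quick arithmetic check on the summary above, for anyone reading the numbers: 2056 issued reads of 4096 bytes each over a 10022 ms run reproduce the reported bandwidth. A sketch using bc (assumed available on the host):

    # 2056 IOs x 4096 B = 8224 KiB of data moved in 10.022 s
    echo 'scale=2; 2056 * 4096 / 1024 / 10.022' | bc
    # -> 820.59, which fio rounds to the 821KiB/s shown above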
00:37:20.717 00:37:20.717 Run status group 0 (all jobs): 00:37:20.717 READ: bw=821KiB/s (840kB/s), 821KiB/s-821KiB/s (840kB/s-840kB/s), io=8224KiB (8421kB), run=10022-10022msec 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.717 00:37:20.717 real 0m11.338s 00:37:20.717 user 0m28.462s 00:37:20.717 sys 0m0.988s 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.717 16:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:20.717 ************************************ 00:37:20.717 END TEST fio_dif_1_default 00:37:20.717 ************************************ 00:37:20.717 16:31:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:20.717 16:31:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:20.717 16:31:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.717 16:31:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:20.717 ************************************ 00:37:20.717 START TEST fio_dif_1_multi_subsystems 00:37:20.717 ************************************ 00:37:20.717 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:20.717 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:20.717 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:20.717 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:20.717 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:20.717 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:20.717 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 bdev_null0 00:37:20.718 16:31:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 [2024-11-20 16:31:55.105915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 bdev_null1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:20.718 { 00:37:20.718 "params": { 00:37:20.718 "name": "Nvme$subsystem", 00:37:20.718 "trtype": "$TEST_TRANSPORT", 00:37:20.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:20.718 "adrfam": "ipv4", 00:37:20.718 "trsvcid": "$NVMF_PORT", 00:37:20.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:20.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:20.718 "hdgst": ${hdgst:-false}, 00:37:20.718 "ddgst": ${ddgst:-false} 00:37:20.718 }, 00:37:20.718 "method": "bdev_nvme_attach_controller" 00:37:20.718 } 00:37:20.718 EOF 00:37:20.718 )") 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.718 
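Worth noting while the two-subsystem config is assembled here: the generated JSON (printed in full just below) carries one bdev_nvme_attach_controller entry per subsystem, so filename0 and filename1 in the fio job each land on their own namespace. Outside of fio, roughly the same attachment could be done by hand; a hedged sketch, with flag names mapped from the JSON "params" fields onto scripts/rpc.py options (the trace itself only ever passes the JSON):

    # One attach per subsystem; both ride the same 10.0.0.2:4420 listener, different NQNs
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1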
16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:20.718 { 00:37:20.718 "params": { 00:37:20.718 "name": "Nvme$subsystem", 00:37:20.718 "trtype": "$TEST_TRANSPORT", 00:37:20.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:20.718 "adrfam": "ipv4", 00:37:20.718 "trsvcid": "$NVMF_PORT", 00:37:20.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:20.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:20.718 "hdgst": ${hdgst:-false}, 00:37:20.718 "ddgst": ${ddgst:-false} 00:37:20.718 }, 00:37:20.718 "method": "bdev_nvme_attach_controller" 00:37:20.718 } 00:37:20.718 EOF 00:37:20.718 )") 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:20.718 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:20.718 "params": { 00:37:20.718 "name": "Nvme0", 00:37:20.718 "trtype": "tcp", 00:37:20.718 "traddr": "10.0.0.2", 00:37:20.718 "adrfam": "ipv4", 00:37:20.718 "trsvcid": "4420", 00:37:20.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:20.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:20.718 "hdgst": false, 00:37:20.718 "ddgst": false 00:37:20.718 }, 00:37:20.718 "method": "bdev_nvme_attach_controller" 00:37:20.718 },{ 00:37:20.718 "params": { 00:37:20.718 "name": "Nvme1", 00:37:20.718 "trtype": "tcp", 00:37:20.718 "traddr": "10.0.0.2", 00:37:20.718 "adrfam": "ipv4", 00:37:20.718 "trsvcid": "4420", 00:37:20.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:20.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:20.718 "hdgst": false, 00:37:20.719 "ddgst": false 00:37:20.719 }, 00:37:20.719 "method": "bdev_nvme_attach_controller" 00:37:20.719 }' 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:20.719 16:31:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.719 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:20.719 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:20.719 fio-3.35 00:37:20.719 Starting 2 threads 00:37:30.718 00:37:30.718 filename0: (groupid=0, jobs=1): err= 0: pid=1588163: Wed Nov 20 16:32:06 2024 00:37:30.718 read: IOPS=99, BW=398KiB/s (407kB/s)(3984KiB/10012msec) 00:37:30.718 slat (nsec): min=5446, max=33059, avg=6390.89, stdev=1486.86 00:37:30.718 clat (usec): min=680, max=42159, avg=40188.14, stdev=5656.13 00:37:30.718 lat (usec): min=688, max=42192, avg=40194.54, stdev=5655.89 00:37:30.718 clat percentiles (usec): 00:37:30.718 | 1.00th=[ 701], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:30.718 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:30.718 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:30.718 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:30.718 | 99.99th=[42206] 00:37:30.718 bw ( KiB/s): min= 384, max= 448, per=50.16%, avg=396.80, stdev=19.14, samples=20 00:37:30.718 iops : min= 96, max= 112, avg=99.20, stdev= 4.79, samples=20 00:37:30.718 lat (usec) : 750=1.91%, 1000=0.10% 00:37:30.718 lat (msec) : 50=97.99% 00:37:30.718 cpu : usr=95.85%, sys=3.94%, ctx=12, majf=0, minf=96 00:37:30.718 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:30.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.718 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.718 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:30.718 filename1: (groupid=0, jobs=1): err= 0: pid=1588164: Wed Nov 20 16:32:06 2024 00:37:30.718 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10007msec) 00:37:30.718 slat (nsec): min=5453, max=31723, avg=6270.00, stdev=1563.88 00:37:30.718 clat (usec): min=824, max=42389, avg=40824.99, stdev=2563.34 00:37:30.718 lat (usec): min=829, max=42421, avg=40831.26, stdev=2563.40 00:37:30.718 clat percentiles (usec): 00:37:30.718 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:30.718 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:30.718 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:30.718 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:37:30.718 | 99.99th=[42206] 00:37:30.718 bw ( KiB/s): min= 384, max= 416, per=49.40%, avg=390.40, stdev=13.13, samples=20 00:37:30.718 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:37:30.718 lat (usec) : 1000=0.41% 00:37:30.718 lat (msec) : 50=99.59% 00:37:30.718 cpu : usr=95.72%, sys=4.08%, ctx=13, majf=0, minf=164 00:37:30.718 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:30.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:30.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.718 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.718 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:30.718 00:37:30.718 Run status group 0 (all jobs): 00:37:30.718 READ: bw=789KiB/s (808kB/s), 392KiB/s-398KiB/s (401kB/s-407kB/s), io=7904KiB (8094kB), run=10007-10012msec 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.718 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.719 00:37:30.719 real 0m11.398s 00:37:30.719 user 0m36.814s 00:37:30.719 sys 0m1.170s 00:37:30.719 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.719 16:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 ************************************ 00:37:30.719 END TEST fio_dif_1_multi_subsystems 00:37:30.719 ************************************ 00:37:30.719 16:32:06 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:37:30.719 16:32:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:30.719 16:32:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.719 16:32:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 ************************************ 00:37:30.719 START TEST fio_dif_rand_params 00:37:30.719 ************************************ 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 bdev_null0 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.719 [2024-11-20 16:32:06.587396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.719 
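This rand-params pass re-runs the same plumbing with randomized job parameters (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5, per the variables set above). Only the null-bdev creation differs from the earlier tests; a sketch of that one call with the positional arguments spelled out (parameter meanings inferred from the bdev_null_create usage seen throughout the trace):

    # bdev_null_create <name> <size-in-MiB> <block-size>
    # 64 MiB volume, 512-byte data blocks, 16 bytes of metadata per block, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

With type 3, the reference tag is not tied to the LBA the way it is with type 1, so the same fio workload exercises a different protection-information checking path through the --dif-insert-or-strip transport.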
16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:30.719 { 00:37:30.719 "params": { 00:37:30.719 "name": "Nvme$subsystem", 00:37:30.719 "trtype": "$TEST_TRANSPORT", 00:37:30.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.719 "adrfam": "ipv4", 00:37:30.719 "trsvcid": "$NVMF_PORT", 00:37:30.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.719 "hdgst": ${hdgst:-false}, 00:37:30.719 "ddgst": ${ddgst:-false} 00:37:30.719 }, 00:37:30.719 "method": "bdev_nvme_attach_controller" 00:37:30.719 } 00:37:30.719 EOF 00:37:30.719 )") 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
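The fio_bdev/fio_plugin machinery traced here amounts to preloading SPDK's fio ioengine and feeding fio the bdev JSON over an inherited file descriptor; the ldd | grep libasan | awk '{print $3}' pipeline exists only to prepend an ASAN runtime to LD_PRELOAD when the plugin links against one (it resolves to empty on this build, hence asan_lib=). A minimal equivalent invocation, with bdev.json and job.fio as stand-ins for the /dev/fd/62 and /dev/fd/61 streams the harness generates on the fly, and the plugin path abbreviated:

    # Load the external ioengine; --spdk_json_conf supplies the bdev_nvme_attach_controller config
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio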
00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:30.719 "params": { 00:37:30.719 "name": "Nvme0", 00:37:30.719 "trtype": "tcp", 00:37:30.719 "traddr": "10.0.0.2", 00:37:30.719 "adrfam": "ipv4", 00:37:30.719 "trsvcid": "4420", 00:37:30.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.719 "hdgst": false, 00:37:30.719 "ddgst": false 00:37:30.719 }, 00:37:30.719 "method": "bdev_nvme_attach_controller" 00:37:30.719 }' 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:30.719 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:30.999 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:30.999 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:30.999 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:30.999 16:32:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.340 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:31.340 ... 
00:37:31.340 fio-3.35 00:37:31.340 Starting 3 threads 00:37:37.915 00:37:37.915 filename0: (groupid=0, jobs=1): err= 0: pid=1590421: Wed Nov 20 16:32:12 2024 00:37:37.915 read: IOPS=226, BW=28.4MiB/s (29.7MB/s)(142MiB/5007msec) 00:37:37.915 slat (nsec): min=5461, max=67131, avg=7565.72, stdev=2458.76 00:37:37.915 clat (usec): min=4319, max=90429, avg=13209.60, stdev=15947.76 00:37:37.915 lat (usec): min=4328, max=90435, avg=13217.16, stdev=15947.72 00:37:37.915 clat percentiles (usec): 00:37:37.915 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6521], 00:37:37.915 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8029], 00:37:37.915 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[47973], 95.00th=[49021], 00:37:37.915 | 99.00th=[88605], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:37:37.915 | 99.99th=[90702] 00:37:37.915 bw ( KiB/s): min=15616, max=50176, per=25.46%, avg=29004.80, stdev=10184.42, samples=10 00:37:37.915 iops : min= 122, max= 392, avg=226.60, stdev=79.57, samples=10 00:37:37.915 lat (msec) : 10=84.77%, 20=2.73%, 50=9.42%, 100=3.08% 00:37:37.915 cpu : usr=95.19%, sys=4.53%, ctx=11, majf=0, minf=116 00:37:37.915 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.915 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.915 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:37.915 filename0: (groupid=0, jobs=1): err= 0: pid=1590422: Wed Nov 20 16:32:12 2024 00:37:37.915 read: IOPS=368, BW=46.1MiB/s (48.3MB/s)(233MiB/5045msec) 00:37:37.915 slat (nsec): min=5445, max=31784, avg=6519.34, stdev=1440.48 00:37:37.915 clat (usec): min=3941, max=90011, avg=8105.20, stdev=6089.32 00:37:37.915 lat (usec): min=3947, max=90020, avg=8111.72, stdev=6089.53 00:37:37.915 clat percentiles (usec): 00:37:37.915 | 1.00th=[ 4555], 5.00th=[ 4948], 10.00th=[ 5276], 20.00th=[ 5800], 00:37:37.915 | 30.00th=[ 6259], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7701], 00:37:37.915 | 70.00th=[ 8225], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[10683], 00:37:37.915 | 99.00th=[47449], 99.50th=[48497], 99.90th=[89654], 99.95th=[89654], 00:37:37.915 | 99.99th=[89654] 00:37:37.915 bw ( KiB/s): min=33024, max=54528, per=41.75%, avg=47564.80, stdev=7443.88, samples=10 00:37:37.915 iops : min= 258, max= 426, avg=371.60, stdev=58.16, samples=10 00:37:37.915 lat (msec) : 4=0.11%, 10=91.51%, 20=6.61%, 50=1.51%, 100=0.27% 00:37:37.915 cpu : usr=92.53%, sys=7.18%, ctx=18, majf=0, minf=123 00:37:37.915 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.915 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.915 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:37.915 filename0: (groupid=0, jobs=1): err= 0: pid=1590423: Wed Nov 20 16:32:12 2024 00:37:37.915 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(187MiB/5047msec) 00:37:37.915 slat (nsec): min=5462, max=36661, avg=7218.65, stdev=1654.12 00:37:37.915 clat (usec): min=4440, max=89376, avg=10055.25, stdev=10310.85 00:37:37.915 lat (usec): min=4446, max=89384, avg=10062.47, stdev=10311.07 00:37:37.915 clat percentiles (usec): 00:37:37.915 | 1.00th=[ 4817], 5.00th=[ 5407], 10.00th=[ 5866], 
20.00th=[ 6587], 00:37:37.915 | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 8029], 60.00th=[ 8455], 00:37:37.915 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11863], 00:37:37.915 | 99.00th=[50594], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:37:37.915 | 99.99th=[89654] 00:37:37.915 bw ( KiB/s): min=23808, max=46848, per=33.55%, avg=38220.80, stdev=7447.06, samples=10 00:37:37.915 iops : min= 186, max= 366, avg=298.60, stdev=58.18, samples=10 00:37:37.915 lat (msec) : 10=81.82%, 20=13.90%, 50=3.01%, 100=1.27% 00:37:37.915 cpu : usr=94.15%, sys=5.59%, ctx=13, majf=0, minf=114 00:37:37.915 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.915 issued rwts: total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.915 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:37.915 00:37:37.915 Run status group 0 (all jobs): 00:37:37.915 READ: bw=111MiB/s (117MB/s), 28.4MiB/s-46.1MiB/s (29.7MB/s-48.3MB/s), io=562MiB (589MB), run=5007-5047msec 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 bdev_null0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 [2024-11-20 16:32:12.860515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 bdev_null1 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:37.915 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.916 bdev_null2 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.916 16:32:12 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:37.916 { 00:37:37.916 "params": { 00:37:37.916 "name": "Nvme$subsystem", 00:37:37.916 "trtype": "$TEST_TRANSPORT", 00:37:37.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.916 "adrfam": "ipv4", 00:37:37.916 "trsvcid": "$NVMF_PORT", 00:37:37.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.916 "hdgst": ${hdgst:-false}, 00:37:37.916 "ddgst": ${ddgst:-false} 00:37:37.916 }, 00:37:37.916 "method": "bdev_nvme_attach_controller" 00:37:37.916 } 00:37:37.916 EOF 00:37:37.916 )") 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:37.916 { 00:37:37.916 "params": { 00:37:37.916 "name": "Nvme$subsystem", 00:37:37.916 "trtype": "$TEST_TRANSPORT", 00:37:37.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.916 "adrfam": "ipv4", 00:37:37.916 "trsvcid": "$NVMF_PORT", 00:37:37.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.916 "hdgst": ${hdgst:-false}, 00:37:37.916 "ddgst": ${ddgst:-false} 00:37:37.916 }, 00:37:37.916 "method": "bdev_nvme_attach_controller" 00:37:37.916 } 00:37:37.916 EOF 00:37:37.916 )") 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:37.916 16:32:12 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:37.916 { 00:37:37.916 "params": { 00:37:37.916 "name": "Nvme$subsystem", 00:37:37.916 "trtype": "$TEST_TRANSPORT", 00:37:37.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.916 "adrfam": "ipv4", 00:37:37.916 "trsvcid": "$NVMF_PORT", 00:37:37.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.916 "hdgst": ${hdgst:-false}, 00:37:37.916 "ddgst": ${ddgst:-false} 00:37:37.916 }, 00:37:37.916 "method": "bdev_nvme_attach_controller" 00:37:37.916 } 00:37:37.916 EOF 00:37:37.916 )") 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:37.916 16:32:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:37.916 "params": { 00:37:37.916 "name": "Nvme0", 00:37:37.916 "trtype": "tcp", 00:37:37.916 "traddr": "10.0.0.2", 00:37:37.916 "adrfam": "ipv4", 00:37:37.916 "trsvcid": "4420", 00:37:37.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:37.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:37.916 "hdgst": false, 00:37:37.916 "ddgst": false 00:37:37.916 }, 00:37:37.916 "method": "bdev_nvme_attach_controller" 00:37:37.916 },{ 00:37:37.916 "params": { 00:37:37.916 "name": "Nvme1", 00:37:37.916 "trtype": "tcp", 00:37:37.916 "traddr": "10.0.0.2", 00:37:37.916 "adrfam": "ipv4", 00:37:37.916 "trsvcid": "4420", 00:37:37.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:37.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:37.916 "hdgst": false, 00:37:37.916 "ddgst": false 00:37:37.916 }, 00:37:37.916 "method": "bdev_nvme_attach_controller" 00:37:37.916 },{ 00:37:37.916 "params": { 00:37:37.916 "name": "Nvme2", 00:37:37.916 "trtype": "tcp", 00:37:37.916 "traddr": "10.0.0.2", 00:37:37.916 "adrfam": "ipv4", 00:37:37.916 "trsvcid": "4420", 00:37:37.916 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:37.916 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:37.916 "hdgst": false, 00:37:37.916 "ddgst": false 00:37:37.916 }, 00:37:37.916 "method": "bdev_nvme_attach_controller" 00:37:37.916 }' 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:37.916 
16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:37.916 16:32:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.916 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:37.916 ... 00:37:37.916 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:37.916 ... 00:37:37.916 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:37.916 ... 00:37:37.916 fio-3.35 00:37:37.916 Starting 24 threads 00:37:50.163 00:37:50.163 filename0: (groupid=0, jobs=1): err= 0: pid=1591912: Wed Nov 20 16:32:24 2024 00:37:50.163 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10017msec) 00:37:50.163 slat (usec): min=5, max=103, avg=17.48, stdev=15.34 00:37:50.163 clat (msec): min=4, max=572, avg=28.30, stdev=41.09 00:37:50.163 lat (msec): min=4, max=572, avg=28.32, stdev=41.09 00:37:50.163 clat percentiles (msec): 00:37:50.163 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.163 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.163 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:37:50.163 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 376], 99.95th=[ 376], 00:37:50.163 | 99.99th=[ 575] 00:37:50.163 bw ( KiB/s): min= 144, max= 2949, per=4.14%, avg=2247.85, stdev=1001.73, samples=20 00:37:50.163 iops : min= 36, max= 737, avg=561.95, stdev=250.42, samples=20 00:37:50.163 lat (msec) : 10=0.51%, 20=2.17%, 50=95.62%, 250=0.28%, 500=1.38% 00:37:50.163 lat (msec) : 750=0.04% 00:37:50.163 cpu : usr=99.11%, sys=0.57%, ctx=14, majf=0, minf=51 00:37:50.163 IO depths : 1=5.6%, 2=11.6%, 4=24.1%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:50.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 issued rwts: total=5635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.163 filename0: (groupid=0, jobs=1): err= 0: pid=1591913: Wed Nov 20 16:32:24 2024 00:37:50.163 read: IOPS=574, BW=2297KiB/s (2352kB/s)(22.5MiB/10018msec) 00:37:50.163 slat (usec): min=5, max=131, avg=18.51, stdev=18.82 00:37:50.163 clat (msec): min=5, max=549, avg=27.72, stdev=42.23 00:37:50.163 lat (msec): min=5, max=549, avg=27.74, stdev=42.23 00:37:50.163 clat percentiles (msec): 00:37:50.163 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 23], 00:37:50.163 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.163 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:37:50.163 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 550], 99.95th=[ 550], 00:37:50.163 | 99.99th=[ 550] 00:37:50.163 bw ( KiB/s): min= 128, max= 3024, per=4.23%, avg=2294.40, stdev=1013.22, samples=20 00:37:50.163 iops : min= 32, max= 756, avg=573.60, stdev=253.31, samples=20 00:37:50.163 lat (msec) : 10=1.23%, 20=8.50%, 50=88.60%, 250=0.28%, 500=1.29% 00:37:50.163 lat (msec) : 750=0.10% 00:37:50.163 cpu : usr=99.00%, sys=0.68%, ctx=14, majf=0, minf=31 00:37:50.163 IO depths : 1=5.3%, 2=10.7%, 4=22.4%, 
8=54.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:50.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 issued rwts: total=5752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.163 filename0: (groupid=0, jobs=1): err= 0: pid=1591914: Wed Nov 20 16:32:24 2024 00:37:50.163 read: IOPS=574, BW=2297KiB/s (2352kB/s)(22.4MiB/10006msec) 00:37:50.163 slat (usec): min=5, max=100, avg=21.05, stdev=17.46 00:37:50.163 clat (msec): min=6, max=479, avg=27.70, stdev=39.64 00:37:50.163 lat (msec): min=7, max=479, avg=27.72, stdev=39.63 00:37:50.163 clat percentiles (msec): 00:37:50.163 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 18], 20.00th=[ 22], 00:37:50.163 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.163 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 31], 00:37:50.163 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 481], 00:37:50.163 | 99.99th=[ 481] 00:37:50.163 bw ( KiB/s): min= 176, max= 3056, per=4.17%, avg=2264.42, stdev=1048.52, samples=19 00:37:50.163 iops : min= 44, max= 764, avg=566.11, stdev=262.13, samples=19 00:37:50.163 lat (msec) : 10=0.52%, 20=14.72%, 50=82.98%, 250=0.28%, 500=1.50% 00:37:50.163 cpu : usr=98.90%, sys=0.77%, ctx=14, majf=0, minf=23 00:37:50.163 IO depths : 1=2.8%, 2=5.7%, 4=13.7%, 8=66.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:37:50.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 complete : 0=0.0%, 4=91.2%, 8=4.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 issued rwts: total=5746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.163 filename0: (groupid=0, jobs=1): err= 0: pid=1591915: Wed Nov 20 16:32:24 2024 00:37:50.163 read: IOPS=557, BW=2229KiB/s (2282kB/s)(21.8MiB/10003msec) 00:37:50.163 slat (usec): min=5, max=104, avg=29.30, stdev=18.61 00:37:50.163 clat (msec): min=12, max=679, avg=28.47, stdev=45.66 00:37:50.163 lat (msec): min=12, max=679, avg=28.50, stdev=45.66 00:37:50.163 clat percentiles (msec): 00:37:50.163 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.163 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.163 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 26], 00:37:50.163 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 676], 99.95th=[ 676], 00:37:50.163 | 99.99th=[ 676] 00:37:50.163 bw ( KiB/s): min= 112, max= 2816, per=4.05%, avg=2198.74, stdev=1019.15, samples=19 00:37:50.163 iops : min= 28, max= 704, avg=549.68, stdev=254.79, samples=19 00:37:50.163 lat (msec) : 20=3.61%, 50=94.85%, 250=0.29%, 500=1.08%, 750=0.18% 00:37:50.163 cpu : usr=98.87%, sys=0.81%, ctx=14, majf=0, minf=24 00:37:50.163 IO depths : 1=4.7%, 2=10.0%, 4=22.0%, 8=55.3%, 16=8.0%, 32=0.0%, >=64=0.0% 00:37:50.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 issued rwts: total=5574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.163 filename0: (groupid=0, jobs=1): err= 0: pid=1591916: Wed Nov 20 16:32:24 2024 00:37:50.163 read: IOPS=555, BW=2221KiB/s (2274kB/s)(21.7MiB/10007msec) 00:37:50.163 slat (usec): min=5, max=114, avg=19.75, stdev=18.95 00:37:50.163 clat (msec): min=6, max=609, avg=28.69, stdev=48.38 
00:37:50.163 lat (msec): min=6, max=609, avg=28.71, stdev=48.38 00:37:50.163 clat percentiles (msec): 00:37:50.163 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 23], 00:37:50.163 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.163 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 28], 95.00th=[ 31], 00:37:50.163 | 99.00th=[ 351], 99.50th=[ 481], 99.90th=[ 567], 99.95th=[ 567], 00:37:50.163 | 99.99th=[ 609] 00:37:50.163 bw ( KiB/s): min= 128, max= 2832, per=4.02%, avg=2180.21, stdev=1030.24, samples=19 00:37:50.163 iops : min= 32, max= 708, avg=545.05, stdev=257.56, samples=19 00:37:50.163 lat (msec) : 10=0.41%, 20=10.48%, 50=87.78%, 250=0.18%, 500=1.01% 00:37:50.163 lat (msec) : 750=0.14% 00:37:50.163 cpu : usr=99.10%, sys=0.57%, ctx=14, majf=0, minf=26 00:37:50.163 IO depths : 1=1.1%, 2=2.3%, 4=8.1%, 8=74.7%, 16=13.8%, 32=0.0%, >=64=0.0% 00:37:50.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 complete : 0=0.0%, 4=90.0%, 8=6.7%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.163 issued rwts: total=5556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.163 filename0: (groupid=0, jobs=1): err= 0: pid=1591917: Wed Nov 20 16:32:24 2024 00:37:50.163 read: IOPS=567, BW=2271KiB/s (2325kB/s)(22.2MiB/10005msec) 00:37:50.163 slat (usec): min=5, max=176, avg=23.70, stdev=22.79 00:37:50.163 clat (msec): min=8, max=509, avg=27.99, stdev=42.19 00:37:50.163 lat (msec): min=8, max=509, avg=28.01, stdev=42.18 00:37:50.163 clat percentiles (msec): 00:37:50.163 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 20], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 29], 00:37:50.164 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 502], 99.95th=[ 510], 00:37:50.164 | 99.99th=[ 510] 00:37:50.164 bw ( KiB/s): min= 128, max= 2928, per=4.12%, avg=2236.89, stdev=1016.91, samples=19 00:37:50.164 iops : min= 32, max= 732, avg=559.21, stdev=254.22, samples=19 00:37:50.164 lat (msec) : 10=0.28%, 20=11.36%, 50=86.67%, 250=0.28%, 500=1.23% 00:37:50.164 lat (msec) : 750=0.18% 00:37:50.164 cpu : usr=99.00%, sys=0.67%, ctx=80, majf=0, minf=25 00:37:50.164 IO depths : 1=3.8%, 2=7.6%, 4=16.9%, 8=62.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:37:50.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 complete : 0=0.0%, 4=92.0%, 8=3.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.164 filename0: (groupid=0, jobs=1): err= 0: pid=1591918: Wed Nov 20 16:32:24 2024 00:37:50.164 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.1MiB/10017msec) 00:37:50.164 slat (usec): min=5, max=116, avg=19.48, stdev=17.24 00:37:50.164 clat (msec): min=9, max=464, avg=28.14, stdev=40.20 00:37:50.164 lat (msec): min=9, max=464, avg=28.16, stdev=40.19 00:37:50.164 clat percentiles (msec): 00:37:50.164 | 1.00th=[ 14], 5.00th=[ 19], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:37:50.164 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 456], 99.95th=[ 456], 00:37:50.164 | 99.99th=[ 464] 00:37:50.164 bw ( KiB/s): min= 128, max= 2997, per=4.17%, avg=2259.45, stdev=999.35, samples=20 00:37:50.164 iops : min= 32, max= 749, avg=564.85, 
stdev=249.83, samples=20 00:37:50.164 lat (msec) : 10=0.07%, 20=5.67%, 50=92.46%, 250=0.39%, 500=1.41% 00:37:50.164 cpu : usr=98.96%, sys=0.76%, ctx=14, majf=0, minf=32 00:37:50.164 IO depths : 1=5.4%, 2=10.9%, 4=22.5%, 8=54.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:50.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.164 filename0: (groupid=0, jobs=1): err= 0: pid=1591919: Wed Nov 20 16:32:24 2024 00:37:50.164 read: IOPS=561, BW=2244KiB/s (2298kB/s)(21.9MiB/10005msec) 00:37:50.164 slat (usec): min=5, max=180, avg=22.75, stdev=20.47 00:37:50.164 clat (msec): min=6, max=563, avg=28.34, stdev=43.93 00:37:50.164 lat (msec): min=6, max=563, avg=28.36, stdev=43.93 00:37:50.164 clat percentiles (msec): 00:37:50.164 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 31], 00:37:50.164 | 99.00th=[ 330], 99.50th=[ 384], 99.90th=[ 542], 99.95th=[ 542], 00:37:50.164 | 99.99th=[ 567] 00:37:50.164 bw ( KiB/s): min= 16, max= 3056, per=4.08%, avg=2210.53, stdev=1025.25, samples=19 00:37:50.164 iops : min= 4, max= 764, avg=552.63, stdev=256.31, samples=19 00:37:50.164 lat (msec) : 10=0.36%, 20=9.30%, 50=88.39%, 100=0.29%, 250=0.25% 00:37:50.164 lat (msec) : 500=1.14%, 750=0.29% 00:37:50.164 cpu : usr=98.98%, sys=0.70%, ctx=14, majf=0, minf=45 00:37:50.164 IO depths : 1=3.0%, 2=6.1%, 4=14.1%, 8=65.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:37:50.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 complete : 0=0.0%, 4=90.9%, 8=5.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 issued rwts: total=5614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.164 filename1: (groupid=0, jobs=1): err= 0: pid=1591920: Wed Nov 20 16:32:24 2024 00:37:50.164 read: IOPS=560, BW=2242KiB/s (2296kB/s)(21.9MiB/10018msec) 00:37:50.164 slat (usec): min=5, max=184, avg=24.16, stdev=23.63 00:37:50.164 clat (msec): min=10, max=489, avg=28.34, stdev=42.75 00:37:50.164 lat (msec): min=10, max=489, avg=28.37, stdev=42.75 00:37:50.164 clat percentiles (msec): 00:37:50.164 | 1.00th=[ 14], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 25], 00:37:50.164 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 447], 99.95th=[ 477], 00:37:50.164 | 99.99th=[ 489] 00:37:50.164 bw ( KiB/s): min= 128, max= 2949, per=4.13%, avg=2240.25, stdev=994.51, samples=20 00:37:50.164 iops : min= 32, max= 737, avg=560.05, stdev=248.62, samples=20 00:37:50.164 lat (msec) : 20=2.56%, 50=95.76%, 250=0.25%, 500=1.42% 00:37:50.164 cpu : usr=98.77%, sys=0.77%, ctx=125, majf=0, minf=44 00:37:50.164 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:50.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.164 filename1: (groupid=0, jobs=1): 
err= 0: pid=1591922: Wed Nov 20 16:32:24 2024 00:37:50.164 read: IOPS=565, BW=2260KiB/s (2315kB/s)(22.1MiB/10020msec) 00:37:50.164 slat (usec): min=5, max=185, avg=19.85, stdev=21.40 00:37:50.164 clat (msec): min=3, max=488, avg=28.16, stdev=41.67 00:37:50.164 lat (msec): min=3, max=488, avg=28.18, stdev=41.67 00:37:50.164 clat percentiles (msec): 00:37:50.164 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 26], 00:37:50.164 | 99.00th=[ 342], 99.50th=[ 388], 99.90th=[ 447], 99.95th=[ 489], 00:37:50.164 | 99.99th=[ 489] 00:37:50.164 bw ( KiB/s): min= 128, max= 3104, per=4.16%, avg=2258.40, stdev=1010.12, samples=20 00:37:50.164 iops : min= 32, max= 776, avg=564.60, stdev=252.53, samples=20 00:37:50.164 lat (msec) : 4=0.12%, 10=0.79%, 20=2.84%, 50=94.44%, 250=0.39% 00:37:50.164 lat (msec) : 500=1.41% 00:37:50.164 cpu : usr=98.94%, sys=0.74%, ctx=19, majf=0, minf=32 00:37:50.164 IO depths : 1=5.5%, 2=11.2%, 4=23.2%, 8=53.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:50.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 issued rwts: total=5662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.164 filename1: (groupid=0, jobs=1): err= 0: pid=1591923: Wed Nov 20 16:32:24 2024 00:37:50.164 read: IOPS=567, BW=2270KiB/s (2324kB/s)(22.3MiB/10045msec) 00:37:50.164 slat (usec): min=5, max=132, avg=29.91, stdev=23.45 00:37:50.164 clat (msec): min=6, max=478, avg=27.86, stdev=42.59 00:37:50.164 lat (msec): min=6, max=478, avg=27.89, stdev=42.59 00:37:50.164 clat percentiles (msec): 00:37:50.164 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 30], 00:37:50.164 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 447], 99.95th=[ 472], 00:37:50.164 | 99.99th=[ 481] 00:37:50.164 bw ( KiB/s): min= 128, max= 3072, per=4.20%, avg=2275.85, stdev=1022.49, samples=20 00:37:50.164 iops : min= 32, max= 768, avg=568.95, stdev=255.62, samples=20 00:37:50.164 lat (msec) : 10=0.33%, 20=11.88%, 50=86.14%, 250=0.25%, 500=1.40% 00:37:50.164 cpu : usr=98.95%, sys=0.72%, ctx=15, majf=0, minf=23 00:37:50.164 IO depths : 1=4.0%, 2=8.1%, 4=18.1%, 8=60.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:37:50.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 complete : 0=0.0%, 4=92.3%, 8=2.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 issued rwts: total=5700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.164 filename1: (groupid=0, jobs=1): err= 0: pid=1591924: Wed Nov 20 16:32:24 2024 00:37:50.164 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10006msec) 00:37:50.164 slat (usec): min=5, max=125, avg=24.79, stdev=19.85 00:37:50.164 clat (msec): min=5, max=540, avg=28.11, stdev=42.40 00:37:50.164 lat (msec): min=5, max=540, avg=28.14, stdev=42.39 00:37:50.164 clat percentiles (msec): 00:37:50.164 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 20], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 31], 00:37:50.164 | 99.00th=[ 330], 99.50th=[ 359], 
99.90th=[ 542], 99.95th=[ 542], 00:37:50.164 | 99.99th=[ 542] 00:37:50.164 bw ( KiB/s): min= 128, max= 2928, per=4.10%, avg=2224.00, stdev=1039.56, samples=19 00:37:50.164 iops : min= 32, max= 732, avg=556.00, stdev=259.89, samples=19 00:37:50.164 lat (msec) : 10=0.58%, 20=10.50%, 50=87.21%, 250=0.28%, 500=1.31% 00:37:50.164 lat (msec) : 750=0.11% 00:37:50.164 cpu : usr=98.77%, sys=0.85%, ctx=78, majf=0, minf=41 00:37:50.164 IO depths : 1=3.3%, 2=6.7%, 4=16.7%, 8=63.4%, 16=9.9%, 32=0.0%, >=64=0.0% 00:37:50.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 complete : 0=0.0%, 4=91.9%, 8=3.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.164 issued rwts: total=5655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.164 filename1: (groupid=0, jobs=1): err= 0: pid=1591925: Wed Nov 20 16:32:24 2024 00:37:50.164 read: IOPS=561, BW=2247KiB/s (2301kB/s)(22.0MiB/10014msec) 00:37:50.164 slat (usec): min=5, max=120, avg=27.27, stdev=20.19 00:37:50.164 clat (msec): min=8, max=445, avg=28.23, stdev=42.76 00:37:50.164 lat (msec): min=8, max=445, avg=28.26, stdev=42.76 00:37:50.164 clat percentiles (msec): 00:37:50.164 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.164 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.164 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 26], 00:37:50.164 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 447], 99.95th=[ 447], 00:37:50.164 | 99.99th=[ 447] 00:37:50.164 bw ( KiB/s): min= 128, max= 3072, per=4.14%, avg=2244.00, stdev=1011.50, samples=20 00:37:50.164 iops : min= 32, max= 768, avg=561.00, stdev=252.88, samples=20 00:37:50.164 lat (msec) : 10=0.34%, 20=3.87%, 50=94.08%, 250=0.28%, 500=1.42% 00:37:50.164 cpu : usr=99.07%, sys=0.60%, ctx=24, majf=0, minf=37 00:37:50.165 IO depths : 1=5.4%, 2=11.2%, 4=23.7%, 8=52.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=5626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename1: (groupid=0, jobs=1): err= 0: pid=1591926: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10005msec) 00:37:50.165 slat (usec): min=5, max=129, avg=31.51, stdev=20.20 00:37:50.165 clat (msec): min=7, max=446, avg=28.17, stdev=42.39 00:37:50.165 lat (msec): min=7, max=446, avg=28.21, stdev=42.39 00:37:50.165 clat percentiles (msec): 00:37:50.165 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.165 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.165 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 26], 00:37:50.165 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 447], 99.95th=[ 447], 00:37:50.165 | 99.99th=[ 447] 00:37:50.165 bw ( KiB/s): min= 128, max= 2864, per=4.10%, avg=2221.47, stdev=1031.56, samples=19 00:37:50.165 iops : min= 32, max= 716, avg=555.37, stdev=257.89, samples=19 00:37:50.165 lat (msec) : 10=0.07%, 20=4.37%, 50=93.85%, 250=0.28%, 500=1.42% 00:37:50.165 cpu : usr=99.05%, sys=0.59%, ctx=22, majf=0, minf=43 00:37:50.165 IO depths : 1=5.4%, 2=11.1%, 4=23.2%, 8=53.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=5628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename1: (groupid=0, jobs=1): err= 0: pid=1591927: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=599, BW=2396KiB/s (2454kB/s)(23.4MiB/10020msec) 00:37:50.165 slat (usec): min=5, max=176, avg=22.76, stdev=23.63 00:37:50.165 clat (usec): min=690, max=618688, avg=26536.33, stdev=42385.16 00:37:50.165 lat (usec): min=701, max=618702, avg=26559.08, stdev=42385.43 00:37:50.165 clat percentiles (usec): 00:37:50.165 | 1.00th=[ 1369], 5.00th=[ 12125], 10.00th=[ 15533], 20.00th=[ 21890], 00:37:50.165 | 30.00th=[ 22676], 40.00th=[ 22676], 50.00th=[ 22938], 60.00th=[ 23200], 00:37:50.165 | 70.00th=[ 23462], 80.00th=[ 23725], 90.00th=[ 24249], 95.00th=[ 25035], 00:37:50.165 | 99.00th=[346031], 99.50th=[358613], 99.90th=[476054], 99.95th=[476054], 00:37:50.165 | 99.99th=[616563] 00:37:50.165 bw ( KiB/s): min= 128, max= 5040, per=4.41%, avg=2394.40, stdev=1157.41, samples=20 00:37:50.165 iops : min= 32, max= 1260, avg=598.60, stdev=289.35, samples=20 00:37:50.165 lat (usec) : 750=0.07%, 1000=0.07% 00:37:50.165 lat (msec) : 2=2.92%, 4=0.70%, 10=0.70%, 20=12.08%, 50=81.91% 00:37:50.165 lat (msec) : 250=0.27%, 500=1.27%, 750=0.03% 00:37:50.165 cpu : usr=98.57%, sys=0.99%, ctx=44, majf=0, minf=61 00:37:50.165 IO depths : 1=4.6%, 2=9.4%, 4=20.1%, 8=57.7%, 16=8.1%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=6002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename1: (groupid=0, jobs=1): err= 0: pid=1591928: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.0MiB/10005msec) 00:37:50.165 slat (usec): min=5, max=107, avg=25.59, stdev=18.74 00:37:50.165 clat (msec): min=6, max=524, avg=28.16, stdev=40.46 00:37:50.165 lat (msec): min=6, max=524, avg=28.18, stdev=40.46 00:37:50.165 clat percentiles (msec): 00:37:50.165 | 1.00th=[ 13], 5.00th=[ 19], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.165 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.165 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 27], 00:37:50.165 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 439], 99.95th=[ 527], 00:37:50.165 | 99.99th=[ 527] 00:37:50.165 bw ( KiB/s): min= 176, max= 3184, per=4.09%, avg=2220.63, stdev=1028.97, samples=19 00:37:50.165 iops : min= 44, max= 796, avg=555.16, stdev=257.24, samples=19 00:37:50.165 lat (msec) : 10=0.28%, 20=6.24%, 50=91.39%, 100=0.28%, 250=0.28% 00:37:50.165 lat (msec) : 500=1.45%, 750=0.07% 00:37:50.165 cpu : usr=98.94%, sys=0.73%, ctx=16, majf=0, minf=55 00:37:50.165 IO depths : 1=4.0%, 2=8.5%, 4=19.0%, 8=59.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=5642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename2: (groupid=0, jobs=1): err= 0: pid=1591929: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=585, BW=2341KiB/s (2398kB/s)(22.9MiB/10020msec) 00:37:50.165 slat (usec): min=5, max=116, avg=19.56, stdev=19.22 00:37:50.165 clat (msec): min=5, max=527, 
avg=27.19, stdev=40.32 00:37:50.165 lat (msec): min=5, max=527, avg=27.21, stdev=40.32 00:37:50.165 clat percentiles (msec): 00:37:50.165 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 22], 00:37:50.165 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.165 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 27], 00:37:50.165 | 99.00th=[ 317], 99.50th=[ 342], 99.90th=[ 456], 99.95th=[ 527], 00:37:50.165 | 99.99th=[ 527] 00:37:50.165 bw ( KiB/s): min= 176, max= 3344, per=4.31%, avg=2339.60, stdev=1023.68, samples=20 00:37:50.165 iops : min= 44, max= 836, avg=584.90, stdev=255.92, samples=20 00:37:50.165 lat (msec) : 10=1.07%, 20=15.24%, 50=81.94%, 250=0.38%, 500=1.30% 00:37:50.165 lat (msec) : 750=0.07% 00:37:50.165 cpu : usr=98.86%, sys=0.81%, ctx=13, majf=0, minf=37 00:37:50.165 IO depths : 1=2.1%, 2=5.1%, 4=16.4%, 8=65.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=5865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename2: (groupid=0, jobs=1): err= 0: pid=1591931: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.1MiB/10012msec) 00:37:50.165 slat (usec): min=5, max=126, avg=29.50, stdev=24.67 00:37:50.165 clat (msec): min=5, max=471, avg=28.04, stdev=42.55 00:37:50.165 lat (msec): min=5, max=471, avg=28.07, stdev=42.54 00:37:50.165 clat percentiles (msec): 00:37:50.165 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 23], 00:37:50.165 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.165 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 29], 00:37:50.165 | 99.00th=[ 342], 99.50th=[ 384], 99.90th=[ 472], 99.95th=[ 472], 00:37:50.165 | 99.99th=[ 472] 00:37:50.165 bw ( KiB/s): min= 128, max= 3248, per=4.16%, avg=2258.40, stdev=1015.71, samples=20 00:37:50.165 iops : min= 32, max= 812, avg=564.60, stdev=253.93, samples=20 00:37:50.165 lat (msec) : 10=0.30%, 20=8.81%, 50=89.19%, 250=0.28%, 500=1.41% 00:37:50.165 cpu : usr=99.13%, sys=0.55%, ctx=13, majf=0, minf=29 00:37:50.165 IO depths : 1=4.7%, 2=9.7%, 4=21.2%, 8=56.5%, 16=7.9%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=5662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename2: (groupid=0, jobs=1): err= 0: pid=1591932: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10005msec) 00:37:50.165 slat (usec): min=5, max=123, avg=28.31, stdev=20.57 00:37:50.165 clat (msec): min=5, max=583, avg=27.94, stdev=42.15 00:37:50.165 lat (msec): min=5, max=583, avg=27.96, stdev=42.14 00:37:50.165 clat percentiles (msec): 00:37:50.165 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 23], 00:37:50.165 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.165 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 28], 00:37:50.165 | 99.00th=[ 330], 99.50th=[ 384], 99.90th=[ 472], 99.95th=[ 584], 00:37:50.165 | 99.99th=[ 584] 00:37:50.165 bw ( KiB/s): min= 96, max= 3088, per=4.13%, avg=2237.47, stdev=1029.92, samples=19 00:37:50.165 iops : min= 24, max= 772, avg=559.37, stdev=257.48, 
samples=19 00:37:50.165 lat (msec) : 10=0.42%, 20=9.86%, 50=87.93%, 250=0.39%, 500=1.34% 00:37:50.165 lat (msec) : 750=0.07% 00:37:50.165 cpu : usr=98.90%, sys=0.77%, ctx=13, majf=0, minf=30 00:37:50.165 IO depths : 1=3.6%, 2=8.0%, 4=18.9%, 8=59.9%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=92.6%, 8=2.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=5682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename2: (groupid=0, jobs=1): err= 0: pid=1591933: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10004msec) 00:37:50.165 slat (usec): min=5, max=134, avg=28.91, stdev=20.88 00:37:50.165 clat (msec): min=9, max=384, avg=28.24, stdev=39.18 00:37:50.165 lat (msec): min=9, max=384, avg=28.27, stdev=39.18 00:37:50.165 clat percentiles (msec): 00:37:50.165 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 23], 00:37:50.165 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.165 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 29], 00:37:50.165 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:37:50.165 | 99.99th=[ 384] 00:37:50.165 bw ( KiB/s): min= 128, max= 2912, per=4.09%, avg=2217.26, stdev=1024.85, samples=19 00:37:50.165 iops : min= 32, max= 728, avg=554.32, stdev=256.21, samples=19 00:37:50.165 lat (msec) : 10=0.23%, 20=7.32%, 50=90.46%, 250=0.57%, 500=1.42% 00:37:50.165 cpu : usr=99.15%, sys=0.52%, ctx=15, majf=0, minf=30 00:37:50.165 IO depths : 1=4.4%, 2=8.9%, 4=20.1%, 8=58.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:37:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.165 issued rwts: total=5618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.165 filename2: (groupid=0, jobs=1): err= 0: pid=1591934: Wed Nov 20 16:32:24 2024 00:37:50.165 read: IOPS=560, BW=2244KiB/s (2298kB/s)(21.9MiB/10008msec) 00:37:50.165 slat (usec): min=4, max=129, avg=26.77, stdev=19.37 00:37:50.166 clat (msec): min=10, max=527, avg=28.30, stdev=40.75 00:37:50.166 lat (msec): min=10, max=527, avg=28.32, stdev=40.75 00:37:50.166 clat percentiles (msec): 00:37:50.166 | 1.00th=[ 14], 5.00th=[ 19], 10.00th=[ 22], 20.00th=[ 23], 00:37:50.166 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.166 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 28], 00:37:50.166 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 456], 99.95th=[ 527], 00:37:50.166 | 99.99th=[ 527] 00:37:50.166 bw ( KiB/s): min= 176, max= 2912, per=4.07%, avg=2208.84, stdev=1006.13, samples=19 00:37:50.166 iops : min= 44, max= 728, avg=552.21, stdev=251.53, samples=19 00:37:50.166 lat (msec) : 20=7.20%, 50=90.99%, 250=0.39%, 500=1.35%, 750=0.07% 00:37:50.166 cpu : usr=98.87%, sys=0.80%, ctx=14, majf=0, minf=27 00:37:50.166 IO depths : 1=4.4%, 2=8.9%, 4=19.4%, 8=58.8%, 16=8.6%, 32=0.0%, >=64=0.0% 00:37:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 complete : 0=0.0%, 4=92.6%, 8=2.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 issued rwts: total=5614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.166 filename2: (groupid=0, jobs=1): err= 0: 
pid=1591935: Wed Nov 20 16:32:24 2024 00:37:50.166 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10006msec) 00:37:50.166 slat (usec): min=5, max=110, avg=15.23, stdev=15.52 00:37:50.166 clat (msec): min=5, max=365, avg=28.14, stdev=39.24 00:37:50.166 lat (msec): min=5, max=365, avg=28.15, stdev=39.24 00:37:50.166 clat percentiles (msec): 00:37:50.166 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 23], 00:37:50.166 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.166 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 31], 00:37:50.166 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 368], 99.95th=[ 368], 00:37:50.166 | 99.99th=[ 368] 00:37:50.166 bw ( KiB/s): min= 176, max= 2880, per=4.11%, avg=2228.47, stdev=1019.67, samples=19 00:37:50.166 iops : min= 44, max= 720, avg=557.11, stdev=254.91, samples=19 00:37:50.166 lat (msec) : 10=0.88%, 20=9.21%, 50=88.04%, 250=0.35%, 500=1.52% 00:37:50.166 cpu : usr=99.15%, sys=0.51%, ctx=14, majf=0, minf=37 00:37:50.166 IO depths : 1=0.1%, 2=0.2%, 4=4.0%, 8=79.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:37:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 complete : 0=0.0%, 4=89.6%, 8=8.2%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 issued rwts: total=5676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.166 filename2: (groupid=0, jobs=1): err= 0: pid=1591936: Wed Nov 20 16:32:24 2024 00:37:50.166 read: IOPS=568, BW=2272KiB/s (2327kB/s)(22.2MiB/10007msec) 00:37:50.166 slat (usec): min=5, max=109, avg=23.79, stdev=19.96 00:37:50.166 clat (msec): min=8, max=477, avg=27.97, stdev=40.46 00:37:50.166 lat (msec): min=8, max=477, avg=27.99, stdev=40.46 00:37:50.166 clat percentiles (msec): 00:37:50.166 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 23], 00:37:50.166 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:37:50.166 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 30], 00:37:50.166 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 477], 99.95th=[ 477], 00:37:50.166 | 99.99th=[ 477] 00:37:50.166 bw ( KiB/s): min= 128, max= 3072, per=4.13%, avg=2242.21, stdev=1045.17, samples=19 00:37:50.166 iops : min= 32, max= 768, avg=560.53, stdev=261.28, samples=19 00:37:50.166 lat (msec) : 10=0.48%, 20=11.65%, 50=85.80%, 100=0.28%, 250=0.39% 00:37:50.166 lat (msec) : 500=1.41% 00:37:50.166 cpu : usr=99.03%, sys=0.63%, ctx=18, majf=0, minf=37 00:37:50.166 IO depths : 1=3.9%, 2=7.8%, 4=17.6%, 8=61.6%, 16=9.0%, 32=0.0%, >=64=0.0% 00:37:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 issued rwts: total=5684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.166 filename2: (groupid=0, jobs=1): err= 0: pid=1591937: Wed Nov 20 16:32:24 2024 00:37:50.166 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10014msec) 00:37:50.166 slat (usec): min=5, max=109, avg=24.30, stdev=18.14 00:37:50.166 clat (msec): min=8, max=480, avg=28.24, stdev=42.70 00:37:50.166 lat (msec): min=8, max=480, avg=28.27, stdev=42.69 00:37:50.166 clat percentiles (msec): 00:37:50.166 | 1.00th=[ 13], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:37:50.166 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:37:50.166 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:37:50.166 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 
447], 99.95th=[ 472], 00:37:50.166 | 99.99th=[ 481] 00:37:50.166 bw ( KiB/s): min= 128, max= 3072, per=4.14%, avg=2246.40, stdev=999.69, samples=20 00:37:50.166 iops : min= 32, max= 768, avg=561.60, stdev=249.92, samples=20 00:37:50.166 lat (msec) : 10=0.30%, 20=2.06%, 50=95.97%, 250=0.25%, 500=1.42% 00:37:50.166 cpu : usr=99.03%, sys=0.64%, ctx=12, majf=0, minf=35 00:37:50.166 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.166 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.166 00:37:50.166 Run status group 0 (all jobs): 00:37:50.166 READ: bw=53.0MiB/s (55.5MB/s), 2221KiB/s-2396KiB/s (2274kB/s-2454kB/s), io=532MiB (558MB), run=10003-10045msec 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 bdev_null0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.166 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:50.167 16:32:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.167 [2024-11-20 16:32:24.758430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.167 bdev_null1 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:50.167 16:32:24 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:50.167 { 00:37:50.167 "params": { 00:37:50.167 "name": "Nvme$subsystem", 00:37:50.167 "trtype": "$TEST_TRANSPORT", 00:37:50.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:50.167 "adrfam": "ipv4", 00:37:50.167 "trsvcid": "$NVMF_PORT", 00:37:50.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:50.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:50.167 "hdgst": ${hdgst:-false}, 00:37:50.167 "ddgst": ${ddgst:-false} 00:37:50.167 }, 00:37:50.167 "method": "bdev_nvme_attach_controller" 00:37:50.167 } 00:37:50.167 EOF 00:37:50.167 )") 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:50.167 { 00:37:50.167 "params": { 00:37:50.167 "name": "Nvme$subsystem", 00:37:50.167 "trtype": "$TEST_TRANSPORT", 00:37:50.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:50.167 "adrfam": "ipv4", 00:37:50.167 "trsvcid": "$NVMF_PORT", 00:37:50.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:50.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:50.167 "hdgst": ${hdgst:-false}, 00:37:50.167 "ddgst": ${ddgst:-false} 00:37:50.167 }, 00:37:50.167 "method": "bdev_nvme_attach_controller" 00:37:50.167 } 00:37:50.167 EOF 00:37:50.167 )") 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:50.167 "params": { 00:37:50.167 "name": "Nvme0", 00:37:50.167 "trtype": "tcp", 00:37:50.167 "traddr": "10.0.0.2", 00:37:50.167 "adrfam": "ipv4", 00:37:50.167 "trsvcid": "4420", 00:37:50.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:50.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:50.167 "hdgst": false, 00:37:50.167 "ddgst": false 00:37:50.167 }, 00:37:50.167 "method": "bdev_nvme_attach_controller" 00:37:50.167 },{ 00:37:50.167 "params": { 00:37:50.167 "name": "Nvme1", 00:37:50.167 "trtype": "tcp", 00:37:50.167 "traddr": "10.0.0.2", 00:37:50.167 "adrfam": "ipv4", 00:37:50.167 "trsvcid": "4420", 00:37:50.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:50.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:50.167 "hdgst": false, 00:37:50.167 "ddgst": false 00:37:50.167 }, 00:37:50.167 "method": "bdev_nvme_attach_controller" 00:37:50.167 }' 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:50.167 16:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.167 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:50.167 ... 00:37:50.167 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:50.167 ... 
00:37:50.167 fio-3.35 00:37:50.167 Starting 4 threads 00:37:55.459 00:37:55.459 filename0: (groupid=0, jobs=1): err= 0: pid=1594126: Wed Nov 20 16:32:30 2024 00:37:55.459 read: IOPS=2981, BW=23.3MiB/s (24.4MB/s)(116MiB/5002msec) 00:37:55.459 slat (nsec): min=7922, max=87330, avg=9125.20, stdev=3252.13 00:37:55.459 clat (usec): min=1018, max=5511, avg=2658.10, stdev=150.99 00:37:55.459 lat (usec): min=1034, max=5543, avg=2667.22, stdev=150.75 00:37:55.459 clat percentiles (usec): 00:37:55.459 | 1.00th=[ 2008], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:37:55.459 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:55.459 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2737], 00:37:55.459 | 99.00th=[ 2966], 99.50th=[ 3228], 99.90th=[ 4359], 99.95th=[ 5145], 00:37:55.459 | 99.99th=[ 5211] 00:37:55.459 bw ( KiB/s): min=23696, max=24416, per=25.11%, avg=23864.89, stdev=213.93, samples=9 00:37:55.459 iops : min= 2962, max= 3052, avg=2983.11, stdev=26.74, samples=9 00:37:55.459 lat (msec) : 2=0.91%, 4=98.95%, 10=0.13% 00:37:55.459 cpu : usr=96.60%, sys=3.14%, ctx=7, majf=0, minf=36 00:37:55.459 IO depths : 1=0.1%, 2=0.1%, 4=73.2%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 issued rwts: total=14911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.459 filename0: (groupid=0, jobs=1): err= 0: pid=1594127: Wed Nov 20 16:32:30 2024 00:37:55.459 read: IOPS=2972, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:37:55.459 slat (nsec): min=5444, max=55005, avg=8562.96, stdev=2989.96 00:37:55.459 clat (usec): min=1062, max=6112, avg=2667.74, stdev=163.42 00:37:55.459 lat (usec): min=1068, max=6139, avg=2676.30, stdev=163.37 00:37:55.459 clat percentiles (usec): 00:37:55.459 | 1.00th=[ 2024], 5.00th=[ 2606], 10.00th=[ 2606], 20.00th=[ 2638], 00:37:55.459 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:55.459 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2737], 00:37:55.459 | 99.00th=[ 3032], 99.50th=[ 3785], 99.90th=[ 4047], 99.95th=[ 5800], 00:37:55.459 | 99.99th=[ 5866] 00:37:55.459 bw ( KiB/s): min=23584, max=24032, per=25.05%, avg=23813.33, stdev=114.26, samples=9 00:37:55.459 iops : min= 2948, max= 3004, avg=2976.67, stdev=14.28, samples=9 00:37:55.459 lat (msec) : 2=0.63%, 4=99.15%, 10=0.22% 00:37:55.459 cpu : usr=95.78%, sys=3.96%, ctx=5, majf=0, minf=81 00:37:55.459 IO depths : 1=0.1%, 2=0.1%, 4=73.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 issued rwts: total=14863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.459 filename1: (groupid=0, jobs=1): err= 0: pid=1594128: Wed Nov 20 16:32:30 2024 00:37:55.459 read: IOPS=2976, BW=23.3MiB/s (24.4MB/s)(116MiB/5001msec) 00:37:55.459 slat (nsec): min=7922, max=58665, avg=9304.64, stdev=3707.96 00:37:55.459 clat (usec): min=1573, max=5169, avg=2667.42, stdev=126.30 00:37:55.459 lat (usec): min=1582, max=5209, avg=2676.73, stdev=126.46 00:37:55.459 clat percentiles (usec): 00:37:55.459 | 1.00th=[ 2040], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2638], 00:37:55.459 | 30.00th=[ 2671], 
40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:55.459 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2737], 00:37:55.459 | 99.00th=[ 3032], 99.50th=[ 3163], 99.90th=[ 4228], 99.95th=[ 4424], 00:37:55.459 | 99.99th=[ 5145] 00:37:55.459 bw ( KiB/s): min=23680, max=24032, per=25.05%, avg=23815.11, stdev=101.23, samples=9 00:37:55.459 iops : min= 2960, max= 3004, avg=2976.89, stdev=12.65, samples=9 00:37:55.459 lat (msec) : 2=0.66%, 4=99.20%, 10=0.14% 00:37:55.459 cpu : usr=96.48%, sys=3.24%, ctx=5, majf=0, minf=129 00:37:55.459 IO depths : 1=0.1%, 2=0.1%, 4=65.5%, 8=34.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 issued rwts: total=14885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.459 filename1: (groupid=0, jobs=1): err= 0: pid=1594129: Wed Nov 20 16:32:30 2024 00:37:55.459 read: IOPS=2953, BW=23.1MiB/s (24.2MB/s)(115MiB/5002msec) 00:37:55.459 slat (nsec): min=5430, max=78221, avg=8456.49, stdev=2453.25 00:37:55.459 clat (usec): min=1292, max=45199, avg=2685.59, stdev=997.39 00:37:55.459 lat (usec): min=1299, max=45246, avg=2694.04, stdev=997.63 00:37:55.459 clat percentiles (usec): 00:37:55.459 | 1.00th=[ 2147], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2638], 00:37:55.459 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:55.459 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2737], 00:37:55.459 | 99.00th=[ 3032], 99.50th=[ 3523], 99.90th=[ 4424], 99.95th=[45351], 00:37:55.459 | 99.99th=[45351] 00:37:55.459 bw ( KiB/s): min=21652, max=23888, per=24.80%, avg=23572.00, stdev=721.19, samples=9 00:37:55.459 iops : min= 2706, max= 2986, avg=2946.44, stdev=90.31, samples=9 00:37:55.459 lat (msec) : 2=0.58%, 4=99.29%, 10=0.08%, 50=0.05% 00:37:55.459 cpu : usr=96.42%, sys=3.30%, ctx=5, majf=0, minf=97 00:37:55.459 IO depths : 1=0.1%, 2=0.1%, 4=71.5%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.459 issued rwts: total=14775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.459 00:37:55.459 Run status group 0 (all jobs): 00:37:55.459 READ: bw=92.8MiB/s (97.3MB/s), 23.1MiB/s-23.3MiB/s (24.2MB/s-24.4MB/s), io=464MiB (487MB), run=5001-5002msec 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.459 16:32:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.459 00:37:55.459 real 0m24.626s 00:37:55.459 user 5m12.984s 00:37:55.459 sys 0m4.439s 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.459 16:32:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.459 ************************************ 00:37:55.459 END TEST fio_dif_rand_params 00:37:55.459 ************************************ 00:37:55.459 16:32:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:55.459 16:32:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.459 16:32:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.459 16:32:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:55.459 ************************************ 00:37:55.459 START TEST fio_dif_digest 00:37:55.459 ************************************ 00:37:55.459 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:55.459 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:55.459 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.460 bdev_null0 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.460 [2024-11-20 16:32:31.293928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:55.460 { 00:37:55.460 "params": { 00:37:55.460 "name": "Nvme$subsystem", 00:37:55.460 "trtype": "$TEST_TRANSPORT", 00:37:55.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:55.460 "adrfam": "ipv4", 00:37:55.460 "trsvcid": "$NVMF_PORT", 00:37:55.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:55.460 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:55.460 "hdgst": ${hdgst:-false}, 00:37:55.460 "ddgst": ${ddgst:-false} 00:37:55.460 }, 00:37:55.460 "method": "bdev_nvme_attach_controller" 00:37:55.460 } 00:37:55.460 EOF 00:37:55.460 )") 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:55.460 "params": { 00:37:55.460 "name": "Nvme0", 00:37:55.460 "trtype": "tcp", 00:37:55.460 "traddr": "10.0.0.2", 00:37:55.460 "adrfam": "ipv4", 00:37:55.460 "trsvcid": "4420", 00:37:55.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:55.460 "hdgst": true, 00:37:55.460 "ddgst": true 00:37:55.460 }, 00:37:55.460 "method": "bdev_nvme_attach_controller" 00:37:55.460 }' 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:55.460 16:32:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:56.066 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:56.066 ... 
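The traces above show the pattern these dif tests use on the initiator side: gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem from a heredoc template, the fragments are comma-joined via IFS and printf, and fio consumes the result through a file-descriptor path while the spdk_bdev plugin is LD_PRELOADed. A minimal standalone sketch of that flow follows; the outer "subsystems" wrapper, the relative plugin path, and the job file name are assumptions for illustration, not values taken from this run.

gen_json() {
  # one bdev_nvme_attach_controller fragment per subsystem id passed in
  local sub config=()
  for sub in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # comma-join the fragments; the "subsystems" wrapper is an assumed shape
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}"
}
# hand the config to fio over a process-substitution fd, with the
# spdk_bdev ioengine plugin preloaded (plugin path is an assumption)
LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev \
  --spdk_json_conf <(gen_json 0) job.fio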
00:37:56.066 fio-3.35 00:37:56.066 Starting 3 threads 00:38:08.380 00:38:08.380 filename0: (groupid=0, jobs=1): err= 0: pid=1595648: Wed Nov 20 16:32:42 2024 00:38:08.380 read: IOPS=304, BW=38.1MiB/s (39.9MB/s)(382MiB/10045msec) 00:38:08.380 slat (nsec): min=5835, max=40462, avg=6642.97, stdev=1295.98 00:38:08.380 clat (usec): min=6281, max=52199, avg=9828.57, stdev=2233.82 00:38:08.380 lat (usec): min=6288, max=52206, avg=9835.21, stdev=2233.81 00:38:08.380 clat percentiles (usec): 00:38:08.380 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:38:08.380 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:38:08.380 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:38:08.380 | 99.00th=[11600], 99.50th=[11994], 99.90th=[51119], 99.95th=[51643], 00:38:08.380 | 99.99th=[52167] 00:38:08.380 bw ( KiB/s): min=35328, max=40704, per=34.20%, avg=39129.60, stdev=1246.62, samples=20 00:38:08.380 iops : min= 276, max= 318, avg=305.70, stdev= 9.74, samples=20 00:38:08.380 lat (msec) : 10=65.12%, 20=34.62%, 50=0.03%, 100=0.23% 00:38:08.380 cpu : usr=94.06%, sys=5.68%, ctx=86, majf=0, minf=142 00:38:08.380 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:08.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.380 issued rwts: total=3059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:08.380 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:08.380 filename0: (groupid=0, jobs=1): err= 0: pid=1595649: Wed Nov 20 16:32:42 2024 00:38:08.380 read: IOPS=299, BW=37.4MiB/s (39.3MB/s)(376MiB/10044msec) 00:38:08.380 slat (nsec): min=5817, max=31123, avg=6651.00, stdev=967.21 00:38:08.380 clat (usec): min=6002, max=47170, avg=9995.09, stdev=1296.71 00:38:08.380 lat (usec): min=6008, max=47177, avg=10001.75, stdev=1296.73 00:38:08.380 clat percentiles (usec): 00:38:08.380 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:38:08.380 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:38:08.380 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:38:08.380 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12780], 99.95th=[46924], 00:38:08.380 | 99.99th=[46924] 00:38:08.380 bw ( KiB/s): min=37632, max=40192, per=33.63%, avg=38476.80, stdev=724.55, samples=20 00:38:08.380 iops : min= 294, max= 314, avg=300.60, stdev= 5.66, samples=20 00:38:08.380 lat (msec) : 10=50.03%, 20=49.90%, 50=0.07% 00:38:08.380 cpu : usr=94.44%, sys=5.33%, ctx=21, majf=0, minf=139 00:38:08.380 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:08.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.380 issued rwts: total=3008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:08.380 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:08.380 filename0: (groupid=0, jobs=1): err= 0: pid=1595650: Wed Nov 20 16:32:42 2024 00:38:08.380 read: IOPS=290, BW=36.3MiB/s (38.0MB/s)(364MiB/10047msec) 00:38:08.380 slat (nsec): min=8333, max=35583, avg=9215.81, stdev=1066.92 00:38:08.380 clat (usec): min=6426, max=52778, avg=10317.71, stdev=2297.70 00:38:08.380 lat (usec): min=6434, max=52787, avg=10326.92, stdev=2297.70 00:38:08.380 clat percentiles (usec): 00:38:08.380 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 
00:38:08.380 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:38:08.380 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:38:08.380 | 99.00th=[12387], 99.50th=[12649], 99.90th=[52167], 99.95th=[52167], 00:38:08.380 | 99.99th=[52691] 00:38:08.380 bw ( KiB/s): min=33792, max=38912, per=32.58%, avg=37273.60, stdev=1196.14, samples=20 00:38:08.380 iops : min= 264, max= 304, avg=291.20, stdev= 9.34, samples=20 00:38:08.380 lat (msec) : 10=40.67%, 20=59.06%, 50=0.07%, 100=0.21% 00:38:08.380 cpu : usr=94.17%, sys=5.57%, ctx=11, majf=0, minf=100 00:38:08.380 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:08.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.380 issued rwts: total=2914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:08.380 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:08.380 00:38:08.380 Run status group 0 (all jobs): 00:38:08.380 READ: bw=112MiB/s (117MB/s), 36.3MiB/s-38.1MiB/s (38.0MB/s-39.9MB/s), io=1123MiB (1177MB), run=10044-10047msec 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.380 00:38:08.380 real 0m11.267s 00:38:08.380 user 0m44.018s 00:38:08.380 sys 0m1.991s 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.380 16:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:08.380 ************************************ 00:38:08.380 END TEST fio_dif_digest 00:38:08.380 ************************************ 00:38:08.380 16:32:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:08.380 16:32:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:08.380 rmmod nvme_tcp 00:38:08.380 rmmod nvme_fabrics 00:38:08.380 rmmod nvme_keyring 00:38:08.380 16:32:42 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1585179 ']' 00:38:08.380 16:32:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1585179 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1585179 ']' 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1585179 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585179 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:08.380 16:32:42 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585179' 00:38:08.380 killing process with pid 1585179 00:38:08.381 16:32:42 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1585179 00:38:08.381 16:32:42 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1585179 00:38:08.381 16:32:42 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:08.381 16:32:42 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:10.298 Waiting for block devices as requested 00:38:10.298 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:10.561 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:10.561 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:10.561 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:10.822 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:10.822 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:10.822 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:11.083 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:11.083 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:11.344 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:11.344 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:11.344 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:11.606 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:11.606 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:11.606 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:11.606 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:11.867 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:12.129 16:32:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.129 16:32:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:12.129 16:32:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.676 16:32:49 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:14.676 
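The nvmftestfini sequence just traced follows a fixed shape: sync, unload the kernel NVMe/TCP initiator stack, kill the target by its saved pid, restore only the iptables rules not tagged by SPDK, then remove the target-side network namespace and flush the initiator NIC. A condensed sketch, assuming the nvmfpid variable holds the target pid and reusing the cvl_0_0_ns_spdk / cvl_0_1 names seen in the traces:

cleanup_nvmf_tcp() {
  sync
  # unload the kernel initiator stack (ignore modules already gone)
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true
  # stop the target and wait for the pid to disappear
  if [[ -n ${nvmfpid:-} ]]; then
    kill "$nvmfpid" 2>/dev/null || true
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done
  fi
  # keep every iptables rule except the ones carrying the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # tear down the target-side namespace and flush the initiator interface
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1 || true
}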
00:38:14.676 real 1m18.623s 00:38:14.676 user 8m4.984s 00:38:14.676 sys 0m21.915s 00:38:14.676 16:32:50 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.676 16:32:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:14.676 ************************************ 00:38:14.676 END TEST nvmf_dif 00:38:14.676 ************************************ 00:38:14.676 16:32:50 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:14.676 16:32:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.676 16:32:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.676 16:32:50 -- common/autotest_common.sh@10 -- # set +x 00:38:14.676 ************************************ 00:38:14.676 START TEST nvmf_abort_qd_sizes 00:38:14.676 ************************************ 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:14.676 * Looking for test storage... 00:38:14.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.676 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:14.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.676 --rc genhtml_branch_coverage=1 00:38:14.676 --rc genhtml_function_coverage=1 00:38:14.676 --rc genhtml_legend=1 00:38:14.676 --rc geninfo_all_blocks=1 00:38:14.676 --rc geninfo_unexecuted_blocks=1 00:38:14.676 00:38:14.677 ' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.677 --rc genhtml_branch_coverage=1 00:38:14.677 --rc genhtml_function_coverage=1 00:38:14.677 --rc genhtml_legend=1 00:38:14.677 --rc geninfo_all_blocks=1 00:38:14.677 --rc geninfo_unexecuted_blocks=1 00:38:14.677 00:38:14.677 ' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.677 --rc genhtml_branch_coverage=1 00:38:14.677 --rc genhtml_function_coverage=1 00:38:14.677 --rc genhtml_legend=1 00:38:14.677 --rc geninfo_all_blocks=1 00:38:14.677 --rc geninfo_unexecuted_blocks=1 00:38:14.677 00:38:14.677 ' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.677 --rc genhtml_branch_coverage=1 00:38:14.677 --rc genhtml_function_coverage=1 00:38:14.677 --rc genhtml_legend=1 00:38:14.677 --rc geninfo_all_blocks=1 00:38:14.677 --rc geninfo_unexecuted_blocks=1 00:38:14.677 00:38:14.677 ' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:14.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:14.677 16:32:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:22.823 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.823 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:22.824 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:22.824 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:22.824 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:22.824 16:32:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:22.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:22.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:38:22.824 00:38:22.824 --- 10.0.0.2 ping statistics --- 00:38:22.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.824 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:22.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:22.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:38:22.824 00:38:22.824 --- 10.0.0.1 ping statistics --- 00:38:22.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.824 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:22.824 16:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:25.374 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:25.374 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:25.634 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:25.634 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:25.895 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.895 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:25.895 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:25.895 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.895 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1605164 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1605164 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1605164 ']' 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:25.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:25.896 16:33:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:25.896 [2024-11-20 16:33:01.810415] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:38:25.896 [2024-11-20 16:33:01.810485] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.157 [2024-11-20 16:33:01.886745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:26.157 [2024-11-20 16:33:01.935852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.157 [2024-11-20 16:33:01.935906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:26.157 [2024-11-20 16:33:01.935913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.157 [2024-11-20 16:33:01.935919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.157 [2024-11-20 16:33:01.935924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:26.157 [2024-11-20 16:33:01.937735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.157 [2024-11-20 16:33:01.937869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:26.157 [2024-11-20 16:33:01.938241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.157 [2024-11-20 16:33:01.938242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:26.157 16:33:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:26.417 16:33:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:26.417 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:26.417 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:26.418 
16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.418 16:33:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:26.418 ************************************ 00:38:26.418 START TEST spdk_target_abort 00:38:26.418 ************************************ 00:38:26.418 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:26.418 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:26.418 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:26.418 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.418 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:26.678 spdk_targetn1 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:26.678 [2024-11-20 16:33:02.456459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:26.678 [2024-11-20 16:33:02.508934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:26.678 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:26.679 16:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:26.938 [2024-11-20 16:33:02.654359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:40 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.654410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.667701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:384 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.667734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0031 p:1 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.676651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:656 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.676680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0055 p:1 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.702399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1432 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.702430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b6 p:1 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.708733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1592 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.708767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c9 p:1 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.733610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2344 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.733641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.783180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2640 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.783210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.783428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2648 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.783442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:26.938 [2024-11-20 16:33:02.828682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3960 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:26.938 [2024-11-20 16:33:02.828705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0 00:38:30.230 Initializing NVMe Controllers 00:38:30.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:30.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:30.230 Initialization complete. Launching workers. 
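The abort example launched above (and twice more below) sweeps only the queue depth; every other flag stays fixed across the three runs. A minimal standalone sketch of that sweep, run from the SPDK repo root and reusing the exact flags and target string from the trace (the loop form itself is illustrative):

# -q: queue depth under test, -w rw -M 50: mixed I/O with a 50% read share,
# -o 4096: 4 KiB I/O size, -r: transport ID of the subsystem created above.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done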
00:38:30.230 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10190, failed: 9 00:38:30.230 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2177, failed to submit 8022 00:38:30.230 success 742, unsuccessful 1435, failed 0 00:38:30.230 16:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:30.230 16:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:30.230 [2024-11-20 16:33:05.906335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:832 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:38:30.230 [2024-11-20 16:33:05.906375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:38:30.230 [2024-11-20 16:33:05.930304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:1376 len:8 PRP1 0x200004e56000 PRP2 0x0 00:38:30.230 [2024-11-20 16:33:05.930329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:00b8 p:1 m:0 dnr:0 00:38:30.230 [2024-11-20 16:33:05.978391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:2544 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:38:30.230 [2024-11-20 16:33:05.978414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:30.230 [2024-11-20 16:33:06.026178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:3664 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:38:30.230 [2024-11-20 16:33:06.026200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00cc p:0 m:0 dnr:0 00:38:33.527 Initializing NVMe Controllers 00:38:33.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:33.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:33.527 Initialization complete. Launching workers. 
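Each run closes with a three-line summary like the one above, and the counters are internally consistent: aborts submitted plus aborts that failed to submit equals total I/O issued, and successful plus unsuccessful aborts equals the number submitted. A quick check with the qd=4 totals printed above (pure arithmetic, nothing SPDK-specific):

# Numbers copied from the qd=4 summary above.
io_completed=10190 io_failed=9
submitted=2177 not_submitted=8022
success=742 unsuccessful=1435
(( submitted + not_submitted == io_completed + io_failed )) && echo 'abort attempts cover all I/O'
(( success + unsuccessful == submitted )) && echo 'submitted aborts fully accounted for'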
00:38:33.527 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8594, failed: 4 00:38:33.527 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 7383 00:38:33.527 success 376, unsuccessful 839, failed 0 00:38:33.527 16:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:33.527 16:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:33.527 [2024-11-20 16:33:09.154662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:151 nsid:1 lba:3768 len:8 PRP1 0x200004afa000 PRP2 0x0 00:38:33.527 [2024-11-20 16:33:09.154691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:151 cdw0:0 sqhd:00ba p:0 m:0 dnr:0 00:38:35.444 [2024-11-20 16:33:11.315412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:151 nsid:1 lba:254472 len:8 PRP1 0x200004b0e000 PRP2 0x0 00:38:35.444 [2024-11-20 16:33:11.315449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:151 cdw0:0 sqhd:001a p:1 m:0 dnr:0 00:38:36.387 Initializing NVMe Controllers 00:38:36.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:36.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:36.387 Initialization complete. Launching workers. 00:38:36.387 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43736, failed: 2 00:38:36.387 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2714, failed to submit 41024 00:38:36.387 success 610, unsuccessful 2104, failed 0 00:38:36.387 16:33:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:36.387 16:33:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.387 16:33:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.387 16:33:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.387 16:33:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:36.387 16:33:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.387 16:33:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1605164 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1605164 ']' 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1605164 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605164 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605164' 00:38:38.297 killing process with pid 1605164 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1605164 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1605164 00:38:38.297 00:38:38.297 real 0m12.035s 00:38:38.297 user 0m46.887s 00:38:38.297 sys 0m1.944s 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.297 16:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:38.297 ************************************ 00:38:38.297 END TEST spdk_target_abort 00:38:38.297 ************************************ 00:38:38.297 16:33:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:38.297 16:33:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:38.297 16:33:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.297 16:33:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:38.557 ************************************ 00:38:38.557 START TEST kernel_target_abort 00:38:38.557 ************************************ 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:38.557 16:33:14 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:38.557 16:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:41.858 Waiting for block devices as requested 00:38:41.858 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:41.858 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:41.858 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:42.118 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:42.118 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:42.118 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:42.379 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:42.379 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:42.379 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:42.640 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:42.640 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:42.900 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:42.900 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:42.900 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:43.160 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:43.160 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:43.160 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:43.421 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:43.683 No valid GPT data, bailing 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:43.683 16:33:19 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:43.683 00:38:43.683 Discovery Log Number of Records 2, Generation counter 2 00:38:43.683 =====Discovery Log Entry 0====== 00:38:43.683 trtype: tcp 00:38:43.683 adrfam: ipv4 00:38:43.683 subtype: current discovery subsystem 00:38:43.683 treq: not specified, sq flow control disable supported 00:38:43.683 portid: 1 00:38:43.683 trsvcid: 4420 00:38:43.683 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:43.683 traddr: 10.0.0.1 00:38:43.683 eflags: none 00:38:43.683 sectype: none 00:38:43.683 =====Discovery Log Entry 1====== 00:38:43.683 trtype: tcp 00:38:43.683 adrfam: ipv4 00:38:43.683 subtype: nvme subsystem 00:38:43.683 treq: not specified, sq flow control disable supported 00:38:43.683 portid: 1 00:38:43.683 trsvcid: 4420 00:38:43.683 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:43.683 traddr: 10.0.0.1 00:38:43.683 eflags: none 00:38:43.683 sectype: none 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:43.683 
16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:43.683 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:43.684 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:43.684 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:43.684 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:43.684 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:43.684 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:43.684 16:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.983 Initializing NVMe Controllers 00:38:46.983 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:46.983 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:46.983 Initialization complete. Launching workers. 
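The target these kernel_target_abort runs hit was assembled a few entries earlier through the nvmet configfs tree. A standalone mirror of those traced steps; the device, NQN, address, and port all come from the log, while the attribute file names are the standard nvmet ones and are assumed to be what each bare `echo` above targeted:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"                       # configfs auto-populates namespaces/ under it
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on the port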
00:38:46.983 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67936, failed: 0 00:38:46.983 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67936, failed to submit 0 00:38:46.983 success 0, unsuccessful 67936, failed 0 00:38:46.983 16:33:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:46.983 16:33:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:50.283 Initializing NVMe Controllers 00:38:50.283 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:50.283 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:50.283 Initialization complete. Launching workers. 00:38:50.283 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120607, failed: 0 00:38:50.283 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30354, failed to submit 90253 00:38:50.283 success 0, unsuccessful 30354, failed 0 00:38:50.283 16:33:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:50.283 16:33:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:53.576 Initializing NVMe Controllers 00:38:53.576 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:53.576 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:53.576 Initialization complete. Launching workers. 
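The abort example drives the kernel target directly over the fabric; for manual poking, the same endpoint can be reached with nvme-cli, reusing the host NQN from the discover call earlier in the log. A hypothetical one-off session, not something the harness runs:

nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme list                                      # the namespace appears as a new /dev/nvmeXnY
nvme disconnect -n nqn.2016-06.io.spdk:testnqn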
00:38:53.576 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145731, failed: 0 00:38:53.576 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36482, failed to submit 109249 00:38:53.576 success 0, unsuccessful 36482, failed 0 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:53.576 16:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:56.877 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:56.877 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:58.261 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:58.832 00:38:58.832 real 0m20.292s 00:38:58.832 user 0m9.891s 00:38:58.832 sys 0m6.025s 00:38:58.832 16:33:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.832 16:33:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.832 ************************************ 00:38:58.832 END TEST kernel_target_abort 00:38:58.832 ************************************ 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:58.832 rmmod nvme_tcp 00:38:58.832 rmmod nvme_fabrics 00:38:58.832 rmmod nvme_keyring 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1605164 ']' 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1605164 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1605164 ']' 00:38:58.832 16:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1605164 00:38:58.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1605164) - No such process 00:38:58.833 16:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1605164 is not found' 00:38:58.833 Process with pid 1605164 is not found 00:38:58.833 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:58.833 16:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:02.134 Waiting for block devices as requested 00:39:02.134 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:02.134 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:02.397 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:02.397 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:02.397 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:02.659 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:02.659 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:02.659 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:02.920 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:02.920 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:03.181 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:03.181 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:03.181 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:03.442 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:03.442 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:03.442 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:03.703 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
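clean_kernel_target and nvmftestfini, traced above, unwind everything in reverse: disable the namespace, unlink the port, remove the configfs directories children-first, then unload the modules. The same sequence as a standalone sketch, with paths and module names as traced:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"         # quiesce the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
modprobe -r nvmet_tcp nvmet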
00:39:03.964 16:33:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.875 16:33:41 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:06.136 00:39:06.136 real 0m51.725s 00:39:06.136 user 1m2.141s 00:39:06.136 sys 0m19.060s 00:39:06.136 16:33:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.136 16:33:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:06.136 ************************************ 00:39:06.136 END TEST nvmf_abort_qd_sizes 00:39:06.136 ************************************ 00:39:06.136 16:33:41 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:06.136 16:33:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:06.136 16:33:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.136 16:33:41 -- common/autotest_common.sh@10 -- # set +x 00:39:06.136 ************************************ 00:39:06.136 START TEST keyring_file 00:39:06.136 ************************************ 00:39:06.136 16:33:41 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:06.136 * Looking for test storage... 00:39:06.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:06.136 16:33:42 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:06.136 16:33:42 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:06.136 16:33:42 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:06.403 16:33:42 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:06.403 16:33:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:06.403 16:33:42 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.404 16:33:42 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.404 --rc genhtml_branch_coverage=1 00:39:06.404 --rc genhtml_function_coverage=1 00:39:06.404 --rc genhtml_legend=1 00:39:06.404 --rc geninfo_all_blocks=1 00:39:06.404 --rc geninfo_unexecuted_blocks=1 00:39:06.404 00:39:06.404 ' 00:39:06.404 16:33:42 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.404 --rc genhtml_branch_coverage=1 00:39:06.404 --rc genhtml_function_coverage=1 00:39:06.404 --rc genhtml_legend=1 00:39:06.404 --rc geninfo_all_blocks=1 00:39:06.404 --rc geninfo_unexecuted_blocks=1 00:39:06.404 00:39:06.404 ' 00:39:06.404 16:33:42 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.404 --rc genhtml_branch_coverage=1 00:39:06.404 --rc genhtml_function_coverage=1 00:39:06.404 --rc genhtml_legend=1 00:39:06.404 --rc geninfo_all_blocks=1 00:39:06.404 --rc geninfo_unexecuted_blocks=1 00:39:06.404 00:39:06.404 ' 00:39:06.404 16:33:42 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:06.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.404 --rc genhtml_branch_coverage=1 00:39:06.404 --rc genhtml_function_coverage=1 00:39:06.404 --rc genhtml_legend=1 00:39:06.404 --rc geninfo_all_blocks=1 00:39:06.404 --rc geninfo_unexecuted_blocks=1 00:39:06.404 00:39:06.404 ' 00:39:06.404 16:33:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.404 
16:33:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.404 16:33:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:06.404 16:33:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.404 16:33:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.404 16:33:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.404 16:33:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.404 16:33:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.404 16:33:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.404 16:33:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:06.404 16:33:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@51 -- # : 0 
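nvmf/common.sh above derives the host identity once and reuses it for every --hostnqn/--hostid pair: the NQN comes from `nvme gen-hostnqn` and the host ID is the UUID embedded in it. A sketch of that derivation; the parameter expansion here is an assumed equivalent of what common.sh does, not a verbatim copy:

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # bare UUID, matching the NVME_HOSTID value in the log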
00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:06.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:06.404 16:33:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:06.404 16:33:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:06.404 16:33:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:06.404 16:33:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:06.404 16:33:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:06.404 16:33:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AtPhUplPIp 00:39:06.404 16:33:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:06.404 16:33:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AtPhUplPIp 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AtPhUplPIp 00:39:06.405 16:33:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AtPhUplPIp 00:39:06.405 16:33:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.fj76IQL0Im 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:06.405 16:33:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:06.405 16:33:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:06.405 16:33:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:06.405 16:33:42 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:06.405 16:33:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:06.405 16:33:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fj76IQL0Im 00:39:06.405 16:33:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fj76IQL0Im 00:39:06.405 16:33:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.fj76IQL0Im 00:39:06.405 16:33:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:06.405 16:33:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=1615848 00:39:06.405 16:33:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1615848 00:39:06.405 16:33:42 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1615848 ']' 00:39:06.405 16:33:42 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.405 16:33:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.405 16:33:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:06.405 16:33:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.405 16:33:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:06.405 [2024-11-20 16:33:42.287468] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
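prep_key above writes each PSK to a 0600 temp file in the NVMe TLS interchange format before handing it to the bdevperf keyring. A sketch of what the traced `python -` step computes, assuming the standard interchange layout: an NVMeTLSkey-1 prefix, a two-digit hash tag (digest 0 taken as 00, i.e. use the configured key directly), and base64 of the key bytes with a CRC32 appended (little-endian assumed):

python3 - <<'EOF'
import base64, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")  # key0 from the log
crc = zlib.crc32(key).to_bytes(4, "little")              # assumed little-endian append
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF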
00:39:06.405 [2024-11-20 16:33:42.287523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615848 ] 00:39:06.722 [2024-11-20 16:33:42.375530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.722 [2024-11-20 16:33:42.413122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:07.414 16:33:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:07.414 [2024-11-20 16:33:43.091036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:07.414 null0 00:39:07.414 [2024-11-20 16:33:43.123084] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:07.414 [2024-11-20 16:33:43.123384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.414 16:33:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:07.414 [2024-11-20 16:33:43.155166] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:07.414 request: 00:39:07.414 { 00:39:07.414 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.414 "secure_channel": false, 00:39:07.414 "listen_address": { 00:39:07.414 "trtype": "tcp", 00:39:07.414 "traddr": "127.0.0.1", 00:39:07.414 "trsvcid": "4420" 00:39:07.414 }, 00:39:07.414 "method": "nvmf_subsystem_add_listener", 00:39:07.414 "req_id": 1 00:39:07.414 } 00:39:07.414 Got JSON-RPC error response 00:39:07.414 response: 00:39:07.414 { 00:39:07.414 "code": -32602, 00:39:07.414 "message": "Invalid parameters" 00:39:07.414 } 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:07.414 16:33:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:07.415 16:33:43 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:07.415 16:33:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=1615896 00:39:07.415 16:33:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1615896 /var/tmp/bperf.sock 00:39:07.415 16:33:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1615896 ']' 00:39:07.415 16:33:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:07.415 16:33:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:07.415 16:33:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.415 16:33:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:07.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:07.415 16:33:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.415 16:33:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:07.415 [2024-11-20 16:33:43.218614] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:39:07.415 [2024-11-20 16:33:43.218663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615896 ] 00:39:07.415 [2024-11-20 16:33:43.304738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.415 [2024-11-20 16:33:43.341589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.355 16:33:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.355 16:33:44 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:08.355 16:33:44 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:08.356 16:33:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:08.356 16:33:44 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fj76IQL0Im 00:39:08.356 16:33:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fj76IQL0Im 00:39:08.616 16:33:44 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:08.616 16:33:44 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:08.616 16:33:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.616 16:33:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:08.616 16:33:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.877 16:33:44 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AtPhUplPIp == \/\t\m\p\/\t\m\p\.\A\t\P\h\U\p\l\P\I\p ]] 00:39:08.877 16:33:44 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:08.877 16:33:44 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:08.877 16:33:44 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.fj76IQL0Im == \/\t\m\p\/\t\m\p\.\f\j\7\6\I\Q\L\0\I\m ]] 00:39:08.877 16:33:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:08.877 16:33:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.136 16:33:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:09.136 16:33:44 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:09.136 16:33:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:09.136 16:33:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:09.136 16:33:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:09.136 16:33:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:09.136 16:33:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.396 16:33:45 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:09.396 16:33:45 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.396 16:33:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.396 [2024-11-20 16:33:45.288743] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:09.655 nvme0n1 00:39:09.655 16:33:45 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:09.655 16:33:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.656 16:33:45 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:09.656 16:33:45 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:09.656 16:33:45 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.915 16:33:45 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:09.915 16:33:45 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:09.915 Running I/O for 1 seconds... 00:39:11.296 19346.00 IOPS, 75.57 MiB/s 00:39:11.296 Latency(us) 00:39:11.296 [2024-11-20T15:33:47.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.296 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:11.296 nvme0n1 : 1.00 19401.82 75.79 0.00 0.00 6586.12 2252.80 15510.19 00:39:11.296 [2024-11-20T15:33:47.232Z] =================================================================================================================== 00:39:11.296 [2024-11-20T15:33:47.233Z] Total : 19401.82 75.79 0.00 0.00 6586.12 2252.80 15510.19 00:39:11.297 { 00:39:11.297 "results": [ 00:39:11.297 { 00:39:11.297 "job": "nvme0n1", 00:39:11.297 "core_mask": "0x2", 00:39:11.297 "workload": "randrw", 00:39:11.297 "percentage": 50, 00:39:11.297 "status": "finished", 00:39:11.297 "queue_depth": 128, 00:39:11.297 "io_size": 4096, 00:39:11.297 "runtime": 1.003772, 00:39:11.297 "iops": 19401.81634873258, 00:39:11.297 "mibps": 75.78834511223664, 00:39:11.297 "io_failed": 0, 00:39:11.297 "io_timeout": 0, 00:39:11.297 "avg_latency_us": 6586.122976123235, 00:39:11.297 "min_latency_us": 2252.8, 00:39:11.297 "max_latency_us": 15510.186666666666 00:39:11.297 } 00:39:11.297 ], 00:39:11.297 "core_count": 1 00:39:11.297 } 00:39:11.297 16:33:46 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:11.297 16:33:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:11.297 16:33:47 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:11.297 16:33:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:11.297 16:33:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.297 16:33:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.297 16:33:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.297 16:33:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:11.557 16:33:47 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:11.557 16:33:47 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:11.557 16:33:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:11.557 16:33:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.557 16:33:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.557 16:33:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:11.557 16:33:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.557 16:33:47 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:11.557 16:33:47 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:11.557 16:33:47 
keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:11.557 16:33:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:11.557 16:33:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:11.557 16:33:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:11.557 16:33:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:11.557 16:33:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:11.557 16:33:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:11.557 16:33:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:11.817 [2024-11-20 16:33:47.586142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:11.817 [2024-11-20 16:33:47.586611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfae740 (107): Transport endpoint is not connected 00:39:11.817 [2024-11-20 16:33:47.587607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfae740 (9): Bad file descriptor 00:39:11.817 [2024-11-20 16:33:47.588609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:11.817 [2024-11-20 16:33:47.588621] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:11.817 [2024-11-20 16:33:47.588627] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:11.817 [2024-11-20 16:33:47.588634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:11.817 request: 00:39:11.817 { 00:39:11.817 "name": "nvme0", 00:39:11.817 "trtype": "tcp", 00:39:11.817 "traddr": "127.0.0.1", 00:39:11.817 "adrfam": "ipv4", 00:39:11.817 "trsvcid": "4420", 00:39:11.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:11.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:11.817 "prchk_reftag": false, 00:39:11.817 "prchk_guard": false, 00:39:11.817 "hdgst": false, 00:39:11.817 "ddgst": false, 00:39:11.817 "psk": "key1", 00:39:11.817 "allow_unrecognized_csi": false, 00:39:11.817 "method": "bdev_nvme_attach_controller", 00:39:11.817 "req_id": 1 00:39:11.817 } 00:39:11.817 Got JSON-RPC error response 00:39:11.817 response: 00:39:11.817 { 00:39:11.817 "code": -5, 00:39:11.817 "message": "Input/output error" 00:39:11.817 } 00:39:11.817 16:33:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:11.817 16:33:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:11.817 16:33:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:11.817 16:33:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:11.817 16:33:47 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:11.817 16:33:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:11.817 16:33:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.817 16:33:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.817 16:33:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:11.817 16:33:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:12.077 16:33:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:12.077 16:33:47 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:12.077 16:33:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:12.077 16:33:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:12.077 16:33:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:12.077 16:33:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:12.077 16:33:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:12.077 16:33:47 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:12.077 16:33:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:12.077 16:33:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:12.338 16:33:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:12.338 16:33:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:12.598 16:33:48 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:12.598 16:33:48 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:12.598 16:33:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:12.598 16:33:48 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:12.598 16:33:48 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AtPhUplPIp 00:39:12.598 16:33:48 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:12.598 16:33:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:12.598 16:33:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:12.598 16:33:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:12.598 16:33:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:12.598 16:33:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:12.598 16:33:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:12.598 16:33:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:12.598 16:33:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:12.858 [2024-11-20 16:33:48.626312] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AtPhUplPIp': 0100660 00:39:12.858 [2024-11-20 16:33:48.626330] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:12.858 request: 00:39:12.858 { 00:39:12.858 "name": "key0", 00:39:12.858 "path": "/tmp/tmp.AtPhUplPIp", 00:39:12.858 "method": "keyring_file_add_key", 00:39:12.858 "req_id": 1 00:39:12.858 } 00:39:12.858 Got JSON-RPC error response 00:39:12.858 response: 00:39:12.858 { 00:39:12.858 "code": -1, 00:39:12.858 "message": "Operation not permitted" 00:39:12.858 } 00:39:12.858 16:33:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:12.858 16:33:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:12.858 16:33:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:12.858 16:33:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:12.858 16:33:48 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AtPhUplPIp 00:39:12.858 16:33:48 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:12.858 16:33:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AtPhUplPIp 00:39:13.118 16:33:48 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AtPhUplPIp 00:39:13.118 16:33:48 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:13.118 16:33:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:13.118 16:33:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.118 16:33:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.118 16:33:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.118 16:33:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.118 16:33:49 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:13.118 16:33:49 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:13.118 16:33:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:13.118 16:33:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:13.118 16:33:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:13.118 16:33:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:13.118 16:33:49 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:13.118 16:33:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:13.118 16:33:49 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:13.118 16:33:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:13.378 [2024-11-20 16:33:49.191741] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AtPhUplPIp': No such file or directory 00:39:13.378 [2024-11-20 16:33:49.191754] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:13.378 [2024-11-20 16:33:49.191768] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:13.378 [2024-11-20 16:33:49.191773] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:13.378 [2024-11-20 16:33:49.191778] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:13.378 [2024-11-20 16:33:49.191783] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:13.378 request: 00:39:13.378 { 00:39:13.378 "name": "nvme0", 00:39:13.378 "trtype": "tcp", 00:39:13.378 "traddr": "127.0.0.1", 00:39:13.378 "adrfam": "ipv4", 00:39:13.378 "trsvcid": "4420", 00:39:13.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:13.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:13.378 "prchk_reftag": false, 00:39:13.378 "prchk_guard": false, 00:39:13.378 "hdgst": false, 00:39:13.378 "ddgst": false, 00:39:13.378 "psk": "key0", 00:39:13.378 "allow_unrecognized_csi": false, 00:39:13.378 "method": "bdev_nvme_attach_controller", 00:39:13.378 "req_id": 1 00:39:13.378 } 00:39:13.378 Got JSON-RPC error response 00:39:13.378 response: 00:39:13.378 { 00:39:13.378 "code": -19, 00:39:13.378 "message": "No such device" 00:39:13.378 } 00:39:13.378 16:33:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:13.378 16:33:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:13.378 16:33:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:13.378 16:33:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:13.378 16:33:49 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:13.378 16:33:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:13.637 16:33:49 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:13.637 16:33:49 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:13.637 16:33:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:13.637 16:33:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:13.637 16:33:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:13.637 16:33:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:13.637 16:33:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.G3yWndYHFa 00:39:13.637 16:33:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:13.638 16:33:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:13.638 16:33:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:13.638 16:33:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:13.638 16:33:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:13.638 16:33:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:13.638 16:33:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:13.638 16:33:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.G3yWndYHFa 00:39:13.638 16:33:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.G3yWndYHFa 00:39:13.638 16:33:49 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.G3yWndYHFa 00:39:13.638 16:33:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.G3yWndYHFa 00:39:13.638 16:33:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.G3yWndYHFa 00:39:13.897 16:33:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:13.897 16:33:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:13.897 nvme0n1 00:39:14.157 16:33:49 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:14.157 16:33:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:14.157 16:33:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:14.157 16:33:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:14.157 16:33:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:14.157 16:33:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.157 16:33:50 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:14.157 16:33:50 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:14.157 16:33:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:14.418 16:33:50 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:14.418 16:33:50 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:14.418 16:33:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:14.418 16:33:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:14.418 16:33:50 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.677 16:33:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:14.677 16:33:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:14.677 16:33:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:14.677 16:33:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:14.677 16:33:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:14.677 16:33:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.677 16:33:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:14.677 16:33:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:14.677 16:33:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:14.677 16:33:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:14.937 16:33:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:14.937 16:33:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:14.937 16:33:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:15.198 16:33:50 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:15.198 16:33:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.G3yWndYHFa 00:39:15.198 16:33:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.G3yWndYHFa 00:39:15.198 16:33:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fj76IQL0Im 00:39:15.198 16:33:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fj76IQL0Im 00:39:15.457 16:33:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.457 16:33:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.717 nvme0n1 00:39:15.717 16:33:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:15.718 16:33:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:15.979 16:33:51 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:15.979 "subsystems": [ 00:39:15.979 { 00:39:15.979 "subsystem": "keyring", 00:39:15.979 "config": [ 00:39:15.979 { 00:39:15.979 "method": "keyring_file_add_key", 00:39:15.979 "params": { 00:39:15.979 "name": "key0", 00:39:15.979 "path": "/tmp/tmp.G3yWndYHFa" 00:39:15.979 } 00:39:15.979 }, 00:39:15.979 { 00:39:15.979 "method": "keyring_file_add_key", 00:39:15.979 "params": { 00:39:15.979 "name": "key1", 00:39:15.979 "path": "/tmp/tmp.fj76IQL0Im" 00:39:15.979 } 00:39:15.979 } 00:39:15.979 ] 00:39:15.979 
}, 00:39:15.979 { 00:39:15.979 "subsystem": "iobuf", 00:39:15.979 "config": [ 00:39:15.979 { 00:39:15.979 "method": "iobuf_set_options", 00:39:15.979 "params": { 00:39:15.979 "small_pool_count": 8192, 00:39:15.979 "large_pool_count": 1024, 00:39:15.979 "small_bufsize": 8192, 00:39:15.979 "large_bufsize": 135168, 00:39:15.979 "enable_numa": false 00:39:15.979 } 00:39:15.979 } 00:39:15.979 ] 00:39:15.979 }, 00:39:15.979 { 00:39:15.979 "subsystem": "sock", 00:39:15.979 "config": [ 00:39:15.979 { 00:39:15.979 "method": "sock_set_default_impl", 00:39:15.979 "params": { 00:39:15.979 "impl_name": "posix" 00:39:15.979 } 00:39:15.979 }, 00:39:15.979 { 00:39:15.979 "method": "sock_impl_set_options", 00:39:15.979 "params": { 00:39:15.979 "impl_name": "ssl", 00:39:15.979 "recv_buf_size": 4096, 00:39:15.979 "send_buf_size": 4096, 00:39:15.979 "enable_recv_pipe": true, 00:39:15.979 "enable_quickack": false, 00:39:15.979 "enable_placement_id": 0, 00:39:15.979 "enable_zerocopy_send_server": true, 00:39:15.979 "enable_zerocopy_send_client": false, 00:39:15.979 "zerocopy_threshold": 0, 00:39:15.979 "tls_version": 0, 00:39:15.979 "enable_ktls": false 00:39:15.979 } 00:39:15.979 }, 00:39:15.979 { 00:39:15.979 "method": "sock_impl_set_options", 00:39:15.979 "params": { 00:39:15.979 "impl_name": "posix", 00:39:15.979 "recv_buf_size": 2097152, 00:39:15.979 "send_buf_size": 2097152, 00:39:15.979 "enable_recv_pipe": true, 00:39:15.979 "enable_quickack": false, 00:39:15.979 "enable_placement_id": 0, 00:39:15.979 "enable_zerocopy_send_server": true, 00:39:15.979 "enable_zerocopy_send_client": false, 00:39:15.979 "zerocopy_threshold": 0, 00:39:15.979 "tls_version": 0, 00:39:15.979 "enable_ktls": false 00:39:15.979 } 00:39:15.979 } 00:39:15.979 ] 00:39:15.979 }, 00:39:15.979 { 00:39:15.979 "subsystem": "vmd", 00:39:15.979 "config": [] 00:39:15.979 }, 00:39:15.979 { 00:39:15.979 "subsystem": "accel", 00:39:15.979 "config": [ 00:39:15.979 { 00:39:15.979 "method": "accel_set_options", 00:39:15.979 "params": { 00:39:15.979 "small_cache_size": 128, 00:39:15.979 "large_cache_size": 16, 00:39:15.979 "task_count": 2048, 00:39:15.979 "sequence_count": 2048, 00:39:15.979 "buf_count": 2048 00:39:15.979 } 00:39:15.979 } 00:39:15.979 ] 00:39:15.979 }, 00:39:15.979 { 00:39:15.979 "subsystem": "bdev", 00:39:15.979 "config": [ 00:39:15.979 { 00:39:15.979 "method": "bdev_set_options", 00:39:15.980 "params": { 00:39:15.980 "bdev_io_pool_size": 65535, 00:39:15.980 "bdev_io_cache_size": 256, 00:39:15.980 "bdev_auto_examine": true, 00:39:15.980 "iobuf_small_cache_size": 128, 00:39:15.980 "iobuf_large_cache_size": 16 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "bdev_raid_set_options", 00:39:15.980 "params": { 00:39:15.980 "process_window_size_kb": 1024, 00:39:15.980 "process_max_bandwidth_mb_sec": 0 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "bdev_iscsi_set_options", 00:39:15.980 "params": { 00:39:15.980 "timeout_sec": 30 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "bdev_nvme_set_options", 00:39:15.980 "params": { 00:39:15.980 "action_on_timeout": "none", 00:39:15.980 "timeout_us": 0, 00:39:15.980 "timeout_admin_us": 0, 00:39:15.980 "keep_alive_timeout_ms": 10000, 00:39:15.980 "arbitration_burst": 0, 00:39:15.980 "low_priority_weight": 0, 00:39:15.980 "medium_priority_weight": 0, 00:39:15.980 "high_priority_weight": 0, 00:39:15.980 "nvme_adminq_poll_period_us": 10000, 00:39:15.980 "nvme_ioq_poll_period_us": 0, 00:39:15.980 "io_queue_requests": 512, 00:39:15.980 
"delay_cmd_submit": true, 00:39:15.980 "transport_retry_count": 4, 00:39:15.980 "bdev_retry_count": 3, 00:39:15.980 "transport_ack_timeout": 0, 00:39:15.980 "ctrlr_loss_timeout_sec": 0, 00:39:15.980 "reconnect_delay_sec": 0, 00:39:15.980 "fast_io_fail_timeout_sec": 0, 00:39:15.980 "disable_auto_failback": false, 00:39:15.980 "generate_uuids": false, 00:39:15.980 "transport_tos": 0, 00:39:15.980 "nvme_error_stat": false, 00:39:15.980 "rdma_srq_size": 0, 00:39:15.980 "io_path_stat": false, 00:39:15.980 "allow_accel_sequence": false, 00:39:15.980 "rdma_max_cq_size": 0, 00:39:15.980 "rdma_cm_event_timeout_ms": 0, 00:39:15.980 "dhchap_digests": [ 00:39:15.980 "sha256", 00:39:15.980 "sha384", 00:39:15.980 "sha512" 00:39:15.980 ], 00:39:15.980 "dhchap_dhgroups": [ 00:39:15.980 "null", 00:39:15.980 "ffdhe2048", 00:39:15.980 "ffdhe3072", 00:39:15.980 "ffdhe4096", 00:39:15.980 "ffdhe6144", 00:39:15.980 "ffdhe8192" 00:39:15.980 ] 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "bdev_nvme_attach_controller", 00:39:15.980 "params": { 00:39:15.980 "name": "nvme0", 00:39:15.980 "trtype": "TCP", 00:39:15.980 "adrfam": "IPv4", 00:39:15.980 "traddr": "127.0.0.1", 00:39:15.980 "trsvcid": "4420", 00:39:15.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.980 "prchk_reftag": false, 00:39:15.980 "prchk_guard": false, 00:39:15.980 "ctrlr_loss_timeout_sec": 0, 00:39:15.980 "reconnect_delay_sec": 0, 00:39:15.980 "fast_io_fail_timeout_sec": 0, 00:39:15.980 "psk": "key0", 00:39:15.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:15.980 "hdgst": false, 00:39:15.980 "ddgst": false, 00:39:15.980 "multipath": "multipath" 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "bdev_nvme_set_hotplug", 00:39:15.980 "params": { 00:39:15.980 "period_us": 100000, 00:39:15.980 "enable": false 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "bdev_wait_for_examine" 00:39:15.980 } 00:39:15.980 ] 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "subsystem": "nbd", 00:39:15.980 "config": [] 00:39:15.980 } 00:39:15.980 ] 00:39:15.980 }' 00:39:15.980 16:33:51 keyring_file -- keyring/file.sh@115 -- # killprocess 1615896 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1615896 ']' 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1615896 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615896 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615896' 00:39:15.980 killing process with pid 1615896 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@973 -- # kill 1615896 00:39:15.980 Received shutdown signal, test time was about 1.000000 seconds 00:39:15.980 00:39:15.980 Latency(us) 00:39:15.980 [2024-11-20T15:33:51.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.980 [2024-11-20T15:33:51.916Z] =================================================================================================================== 00:39:15.980 [2024-11-20T15:33:51.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:15.980 16:33:51 
keyring_file -- common/autotest_common.sh@978 -- # wait 1615896 00:39:15.980 16:33:51 keyring_file -- keyring/file.sh@118 -- # bperfpid=1617711 00:39:15.980 16:33:51 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1617711 /var/tmp/bperf.sock 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1617711 ']' 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:15.980 16:33:51 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:15.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:15.980 16:33:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:15.980 16:33:51 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:15.980 "subsystems": [ 00:39:15.980 { 00:39:15.980 "subsystem": "keyring", 00:39:15.980 "config": [ 00:39:15.980 { 00:39:15.980 "method": "keyring_file_add_key", 00:39:15.980 "params": { 00:39:15.980 "name": "key0", 00:39:15.980 "path": "/tmp/tmp.G3yWndYHFa" 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "keyring_file_add_key", 00:39:15.980 "params": { 00:39:15.980 "name": "key1", 00:39:15.980 "path": "/tmp/tmp.fj76IQL0Im" 00:39:15.980 } 00:39:15.980 } 00:39:15.980 ] 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "subsystem": "iobuf", 00:39:15.980 "config": [ 00:39:15.980 { 00:39:15.980 "method": "iobuf_set_options", 00:39:15.980 "params": { 00:39:15.980 "small_pool_count": 8192, 00:39:15.980 "large_pool_count": 1024, 00:39:15.980 "small_bufsize": 8192, 00:39:15.980 "large_bufsize": 135168, 00:39:15.980 "enable_numa": false 00:39:15.980 } 00:39:15.980 } 00:39:15.980 ] 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "subsystem": "sock", 00:39:15.980 "config": [ 00:39:15.980 { 00:39:15.980 "method": "sock_set_default_impl", 00:39:15.980 "params": { 00:39:15.980 "impl_name": "posix" 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "sock_impl_set_options", 00:39:15.980 "params": { 00:39:15.980 "impl_name": "ssl", 00:39:15.980 "recv_buf_size": 4096, 00:39:15.980 "send_buf_size": 4096, 00:39:15.980 "enable_recv_pipe": true, 00:39:15.980 "enable_quickack": false, 00:39:15.980 "enable_placement_id": 0, 00:39:15.980 "enable_zerocopy_send_server": true, 00:39:15.980 "enable_zerocopy_send_client": false, 00:39:15.980 "zerocopy_threshold": 0, 00:39:15.980 "tls_version": 0, 00:39:15.980 "enable_ktls": false 00:39:15.980 } 00:39:15.980 }, 00:39:15.980 { 00:39:15.980 "method": "sock_impl_set_options", 00:39:15.980 "params": { 00:39:15.980 "impl_name": "posix", 00:39:15.980 "recv_buf_size": 2097152, 00:39:15.980 "send_buf_size": 2097152, 00:39:15.980 "enable_recv_pipe": true, 00:39:15.980 "enable_quickack": false, 00:39:15.980 "enable_placement_id": 0, 00:39:15.980 "enable_zerocopy_send_server": true, 00:39:15.980 "enable_zerocopy_send_client": false, 00:39:15.980 "zerocopy_threshold": 0, 00:39:15.980 "tls_version": 0, 00:39:15.980 "enable_ktls": false 00:39:15.980 } 00:39:15.980 } 00:39:15.980 ] 00:39:15.980 }, 
00:39:15.980 { 00:39:15.981 "subsystem": "vmd", 00:39:15.981 "config": [] 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "subsystem": "accel", 00:39:15.981 "config": [ 00:39:15.981 { 00:39:15.981 "method": "accel_set_options", 00:39:15.981 "params": { 00:39:15.981 "small_cache_size": 128, 00:39:15.981 "large_cache_size": 16, 00:39:15.981 "task_count": 2048, 00:39:15.981 "sequence_count": 2048, 00:39:15.981 "buf_count": 2048 00:39:15.981 } 00:39:15.981 } 00:39:15.981 ] 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "subsystem": "bdev", 00:39:15.981 "config": [ 00:39:15.981 { 00:39:15.981 "method": "bdev_set_options", 00:39:15.981 "params": { 00:39:15.981 "bdev_io_pool_size": 65535, 00:39:15.981 "bdev_io_cache_size": 256, 00:39:15.981 "bdev_auto_examine": true, 00:39:15.981 "iobuf_small_cache_size": 128, 00:39:15.981 "iobuf_large_cache_size": 16 00:39:15.981 } 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "method": "bdev_raid_set_options", 00:39:15.981 "params": { 00:39:15.981 "process_window_size_kb": 1024, 00:39:15.981 "process_max_bandwidth_mb_sec": 0 00:39:15.981 } 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "method": "bdev_iscsi_set_options", 00:39:15.981 "params": { 00:39:15.981 "timeout_sec": 30 00:39:15.981 } 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "method": "bdev_nvme_set_options", 00:39:15.981 "params": { 00:39:15.981 "action_on_timeout": "none", 00:39:15.981 "timeout_us": 0, 00:39:15.981 "timeout_admin_us": 0, 00:39:15.981 "keep_alive_timeout_ms": 10000, 00:39:15.981 "arbitration_burst": 0, 00:39:15.981 "low_priority_weight": 0, 00:39:15.981 "medium_priority_weight": 0, 00:39:15.981 "high_priority_weight": 0, 00:39:15.981 "nvme_adminq_poll_period_us": 10000, 00:39:15.981 "nvme_ioq_poll_period_us": 0, 00:39:15.981 "io_queue_requests": 512, 00:39:15.981 "delay_cmd_submit": true, 00:39:15.981 "transport_retry_count": 4, 00:39:15.981 "bdev_retry_count": 3, 00:39:15.981 "transport_ack_timeout": 0, 00:39:15.981 "ctrlr_loss_timeout_sec": 0, 00:39:15.981 "reconnect_delay_sec": 0, 00:39:15.981 "fast_io_fail_timeout_sec": 0, 00:39:15.981 "disable_auto_failback": false, 00:39:15.981 "generate_uuids": false, 00:39:15.981 "transport_tos": 0, 00:39:15.981 "nvme_error_stat": false, 00:39:15.981 "rdma_srq_size": 0, 00:39:15.981 "io_path_stat": false, 00:39:15.981 "allow_accel_sequence": false, 00:39:15.981 "rdma_max_cq_size": 0, 00:39:15.981 "rdma_cm_event_timeout_ms": 0, 00:39:15.981 "dhchap_digests": [ 00:39:15.981 "sha256", 00:39:15.981 "sha384", 00:39:15.981 "sha512" 00:39:15.981 ], 00:39:15.981 "dhchap_dhgroups": [ 00:39:15.981 "null", 00:39:15.981 "ffdhe2048", 00:39:15.981 "ffdhe3072", 00:39:15.981 "ffdhe4096", 00:39:15.981 "ffdhe6144", 00:39:15.981 "ffdhe8192" 00:39:15.981 ] 00:39:15.981 } 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "method": "bdev_nvme_attach_controller", 00:39:15.981 "params": { 00:39:15.981 "name": "nvme0", 00:39:15.981 "trtype": "TCP", 00:39:15.981 "adrfam": "IPv4", 00:39:15.981 "traddr": "127.0.0.1", 00:39:15.981 "trsvcid": "4420", 00:39:15.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.981 "prchk_reftag": false, 00:39:15.981 "prchk_guard": false, 00:39:15.981 "ctrlr_loss_timeout_sec": 0, 00:39:15.981 "reconnect_delay_sec": 0, 00:39:15.981 "fast_io_fail_timeout_sec": 0, 00:39:15.981 "psk": "key0", 00:39:15.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:15.981 "hdgst": false, 00:39:15.981 "ddgst": false, 00:39:15.981 "multipath": "multipath" 00:39:15.981 } 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "method": "bdev_nvme_set_hotplug", 00:39:15.981 "params": { 
00:39:15.981 "period_us": 100000, 00:39:15.981 "enable": false 00:39:15.981 } 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "method": "bdev_wait_for_examine" 00:39:15.981 } 00:39:15.981 ] 00:39:15.981 }, 00:39:15.981 { 00:39:15.981 "subsystem": "nbd", 00:39:15.981 "config": [] 00:39:15.981 } 00:39:15.981 ] 00:39:15.981 }' 00:39:16.242 [2024-11-20 16:33:51.943618] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 00:39:16.242 [2024-11-20 16:33:51.943673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617711 ] 00:39:16.242 [2024-11-20 16:33:52.025263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.242 [2024-11-20 16:33:52.053191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.502 [2024-11-20 16:33:52.195840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:17.079 16:33:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:17.079 16:33:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:17.079 16:33:52 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:17.079 16:33:52 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:17.079 16:33:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:17.079 16:33:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:17.079 16:33:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:17.079 16:33:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:17.079 16:33:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:17.079 16:33:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:17.079 16:33:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:17.079 16:33:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:17.339 16:33:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:17.339 16:33:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:17.339 16:33:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:17.339 16:33:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:17.339 16:33:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:17.339 16:33:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:17.339 16:33:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:17.339 16:33:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:17.339 16:33:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:17.339 16:33:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:17.339 16:33:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:17.601 16:33:53 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:17.601 16:33:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:17.601 16:33:53 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.G3yWndYHFa /tmp/tmp.fj76IQL0Im 00:39:17.601 16:33:53 keyring_file -- keyring/file.sh@20 -- # killprocess 1617711 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1617711 ']' 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1617711 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1617711 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1617711' 00:39:17.601 killing process with pid 1617711 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@973 -- # kill 1617711 00:39:17.601 Received shutdown signal, test time was about 1.000000 seconds 00:39:17.601 00:39:17.601 Latency(us) 00:39:17.601 [2024-11-20T15:33:53.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.601 [2024-11-20T15:33:53.537Z] =================================================================================================================== 00:39:17.601 [2024-11-20T15:33:53.537Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:17.601 16:33:53 keyring_file -- common/autotest_common.sh@978 -- # wait 1617711 00:39:17.861 16:33:53 keyring_file -- keyring/file.sh@21 -- # killprocess 1615848 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1615848 ']' 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1615848 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615848 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615848' 00:39:17.861 killing process with pid 1615848 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@973 -- # kill 1615848 00:39:17.861 16:33:53 keyring_file -- common/autotest_common.sh@978 -- # wait 1615848 00:39:18.123 00:39:18.123 real 0m11.955s 00:39:18.123 user 0m28.931s 00:39:18.123 sys 0m2.635s 00:39:18.123 16:33:53 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.123 16:33:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:18.123 ************************************ 00:39:18.123 END TEST keyring_file 00:39:18.123 ************************************ 00:39:18.123 16:33:53 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:18.123 16:33:53 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:18.123 16:33:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:18.123 16:33:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:18.123 16:33:53 
-- common/autotest_common.sh@10 -- # set +x 00:39:18.123 ************************************ 00:39:18.123 START TEST keyring_linux 00:39:18.123 ************************************ 00:39:18.123 16:33:53 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:18.123 Joined session keyring: 763011768 00:39:18.123 * Looking for test storage... 00:39:18.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:18.123 16:33:54 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:18.123 16:33:54 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:18.123 16:33:54 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:18.385 16:33:54 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.385 16:33:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:18.385 16:33:54 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.385 16:33:54 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:18.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.385 --rc genhtml_branch_coverage=1 00:39:18.385 --rc genhtml_function_coverage=1 00:39:18.385 --rc genhtml_legend=1 00:39:18.385 --rc geninfo_all_blocks=1 00:39:18.385 --rc geninfo_unexecuted_blocks=1 00:39:18.385 00:39:18.385 ' 00:39:18.385 16:33:54 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:18.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.385 --rc genhtml_branch_coverage=1 00:39:18.385 --rc genhtml_function_coverage=1 00:39:18.385 --rc genhtml_legend=1 00:39:18.385 --rc geninfo_all_blocks=1 00:39:18.385 --rc geninfo_unexecuted_blocks=1 00:39:18.385 00:39:18.385 ' 00:39:18.385 16:33:54 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:18.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.386 --rc genhtml_branch_coverage=1 00:39:18.386 --rc genhtml_function_coverage=1 00:39:18.386 --rc genhtml_legend=1 00:39:18.386 --rc geninfo_all_blocks=1 00:39:18.386 --rc geninfo_unexecuted_blocks=1 00:39:18.386 00:39:18.386 ' 00:39:18.386 16:33:54 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:18.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.386 --rc genhtml_branch_coverage=1 00:39:18.386 --rc genhtml_function_coverage=1 00:39:18.386 --rc genhtml_legend=1 00:39:18.386 --rc geninfo_all_blocks=1 00:39:18.386 --rc geninfo_unexecuted_blocks=1 00:39:18.386 00:39:18.386 ' 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:18.386 16:33:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:18.386 16:33:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.386 16:33:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.386 16:33:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.386 16:33:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.386 16:33:54 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.386 16:33:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.386 16:33:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:18.386 16:33:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:18.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:18.386 /tmp/:spdk-test:key0 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:18.386 
16:33:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:18.386 16:33:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:18.386 16:33:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:18.386 /tmp/:spdk-test:key1 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1618150 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1618150 00:39:18.386 16:33:54 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:18.386 16:33:54 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1618150 ']' 00:39:18.386 16:33:54 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.386 16:33:54 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:18.386 16:33:54 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:18.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:18.386 16:33:54 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:18.386 16:33:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:18.648 [2024-11-20 16:33:54.320466] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
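[editor's sketch] The prep_key/format_interchange_psk steps traced above shell out to python to wrap the hex PSK. Judging by the printed key, the payload is the ASCII key bytes plus a little-endian CRC32, base64-encoded and framed as NVMeTLSkey-1:<digest>:<b64>:. The helper name and the exact CRC/digest handling below are assumptions, not the verbatim nvmf/common.sh code:

    format_interchange_psk_sketch() {  # hypothetical name; mirrors the "python -" step above
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'PY'
    import base64, struct, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)  # CRC32 over the key bytes, little-endian (assumed)
    print('NVMeTLSkey-1:%02x:%s:' % (digest, base64.b64encode(key + crc).decode()))
    PY
    }
    format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0  # -> NVMeTLSkey-1:00:MDAx...JEiQ: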
00:39:18.648 [2024-11-20 16:33:54.320541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618150 ] 00:39:18.648 [2024-11-20 16:33:54.410589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.649 [2024-11-20 16:33:54.446406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:19.220 16:33:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:19.220 16:33:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:19.220 16:33:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:19.220 16:33:55 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.220 16:33:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:19.220 [2024-11-20 16:33:55.107890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:19.220 null0 00:39:19.220 [2024-11-20 16:33:55.139948] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:19.220 [2024-11-20 16:33:55.140305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:19.482 16:33:55 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.482 16:33:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:19.482 560101993 00:39:19.482 16:33:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:19.482 879581528 00:39:19.482 16:33:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1618483 00:39:19.482 16:33:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1618483 /var/tmp/bperf.sock 00:39:19.482 16:33:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:19.482 16:33:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1618483 ']' 00:39:19.482 16:33:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:19.482 16:33:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:19.482 16:33:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:19.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:19.482 16:33:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:19.482 16:33:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:19.482 [2024-11-20 16:33:55.216572] Starting SPDK v25.01-pre git sha1 d3dfde872 / DPDK 24.03.0 initialization... 
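[editor's sketch] The keyctl calls in this run are plain Linux session-keyring operations; a condensed round trip matching the trace (serials such as 560101993 differ per run):

    sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)  # add to the session keyring; prints the serial
    keyctl print "$sn"                       # payload round-trips as NVMeTLSkey-1:00:...:
    keyctl search @s user :spdk-test:key0    # name -> serial lookup, what get_keysn wraps
    keyctl unlink "$sn" @s                   # drop the link again when done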
00:39:19.482 [2024-11-20 16:33:55.216620] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618483 ] 00:39:19.482 [2024-11-20 16:33:55.299758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:19.482 [2024-11-20 16:33:55.329267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.428 16:33:56 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:20.428 16:33:56 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:20.428 16:33:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:20.428 16:33:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:20.428 16:33:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:20.428 16:33:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:20.689 16:33:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:20.689 16:33:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:20.689 [2024-11-20 16:33:56.569417] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:20.951 nvme0n1 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:20.951 16:33:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:20.951 16:33:56 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:20.952 16:33:56 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:20.952 16:33:56 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:20.952 16:33:56 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:20.952 16:33:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:21.213 16:33:57 keyring_linux -- keyring/linux.sh@25 -- # sn=560101993 00:39:21.213 16:33:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:21.213 16:33:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:21.213 16:33:57 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 560101993 == \5\6\0\1\0\1\9\9\3 ]] 00:39:21.213 16:33:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 560101993 00:39:21.213 16:33:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:21.213 16:33:57 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:21.213 Running I/O for 1 seconds... 00:39:22.598 24377.00 IOPS, 95.22 MiB/s 00:39:22.598 Latency(us) 00:39:22.598 [2024-11-20T15:33:58.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:22.598 nvme0n1 : 1.01 24377.51 95.22 0.00 0.00 5235.49 4341.76 14636.37 00:39:22.598 [2024-11-20T15:33:58.534Z] =================================================================================================================== 00:39:22.598 [2024-11-20T15:33:58.534Z] Total : 24377.51 95.22 0.00 0.00 5235.49 4341.76 14636.37 00:39:22.598 { 00:39:22.598 "results": [ 00:39:22.598 { 00:39:22.598 "job": "nvme0n1", 00:39:22.598 "core_mask": "0x2", 00:39:22.598 "workload": "randread", 00:39:22.598 "status": "finished", 00:39:22.598 "queue_depth": 128, 00:39:22.598 "io_size": 4096, 00:39:22.598 "runtime": 1.00523, 00:39:22.598 "iops": 24377.50564547417, 00:39:22.598 "mibps": 95.22463142763348, 00:39:22.598 "io_failed": 0, 00:39:22.598 "io_timeout": 0, 00:39:22.598 "avg_latency_us": 5235.485860572672, 00:39:22.598 "min_latency_us": 4341.76, 00:39:22.598 "max_latency_us": 14636.373333333333 00:39:22.598 } 00:39:22.598 ], 00:39:22.598 "core_count": 1 00:39:22.598 } 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:22.598 16:33:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:22.598 16:33:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:22.598 16:33:58 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:22.598 16:33:58 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:22.598 16:33:58 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
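[editor's sketch] check_keys, exercised above, boils down to comparing the RPC view of the keyring with the kernel's. A condensation under the same rpc.py/bperf.sock assumptions; the helper name and return-code structure are reconstructed, not copied from keyring/linux.sh:

    check_keys_sketch() {  # hypothetical condensation of check_keys <count> [<name>]
        local count=$1 name=$2 rpc='scripts/rpc.py -s /var/tmp/bperf.sock'
        (( $($rpc keyring_get_keys | jq length) == count )) || return 1
        (( count == 0 )) && return 0
        local sn
        sn=$($rpc keyring_get_keys | jq -r ".[] | select(.name == \"$name\") | .sn")
        [[ $sn == $(keyctl search @s user "$name") ]]  # RPC-reported serial must match the kernel's
    }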
00:39:22.598 16:33:58 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:22.598 16:33:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:22.598 16:33:58 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:22.598 16:33:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:22.598 16:33:58 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:22.598 16:33:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:22.859 [2024-11-20 16:33:58.649208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:22.859 [2024-11-20 16:33:58.649449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194bba0 (107): Transport endpoint is not connected 00:39:22.859 [2024-11-20 16:33:58.650446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194bba0 (9): Bad file descriptor 00:39:22.859 [2024-11-20 16:33:58.651448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:22.859 [2024-11-20 16:33:58.651456] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:22.859 [2024-11-20 16:33:58.651462] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:22.859 [2024-11-20 16:33:58.651468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:22.859 request: 00:39:22.859 { 00:39:22.859 "name": "nvme0", 00:39:22.859 "trtype": "tcp", 00:39:22.859 "traddr": "127.0.0.1", 00:39:22.859 "adrfam": "ipv4", 00:39:22.859 "trsvcid": "4420", 00:39:22.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:22.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:22.859 "prchk_reftag": false, 00:39:22.859 "prchk_guard": false, 00:39:22.859 "hdgst": false, 00:39:22.859 "ddgst": false, 00:39:22.859 "psk": ":spdk-test:key1", 00:39:22.859 "allow_unrecognized_csi": false, 00:39:22.859 "method": "bdev_nvme_attach_controller", 00:39:22.859 "req_id": 1 00:39:22.859 } 00:39:22.859 Got JSON-RPC error response 00:39:22.859 response: 00:39:22.859 { 00:39:22.859 "code": -5, 00:39:22.859 "message": "Input/output error" 00:39:22.859 } 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@33 -- # sn=560101993 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 560101993 00:39:22.859 1 links removed 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@33 -- # sn=879581528 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 879581528 00:39:22.859 1 links removed 00:39:22.859 16:33:58 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1618483 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1618483 ']' 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1618483 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1618483 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1618483' 00:39:22.859 killing process with pid 1618483 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 1618483 00:39:22.859 Received shutdown signal, test time was about 1.000000 seconds 00:39:22.859 00:39:22.859 
Latency(us) 00:39:22.859 [2024-11-20T15:33:58.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.859 [2024-11-20T15:33:58.795Z] =================================================================================================================== 00:39:22.859 [2024-11-20T15:33:58.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:22.859 16:33:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 1618483 00:39:23.121 16:33:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1618150 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1618150 ']' 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1618150 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1618150 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1618150' 00:39:23.121 killing process with pid 1618150 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 1618150 00:39:23.121 16:33:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 1618150 00:39:23.382 00:39:23.382 real 0m5.174s 00:39:23.382 user 0m9.603s 00:39:23.382 sys 0m1.458s 00:39:23.382 16:33:59 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:23.382 16:33:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:23.382 ************************************ 00:39:23.382 END TEST keyring_linux 00:39:23.382 ************************************ 00:39:23.382 16:33:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:23.382 16:33:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:23.382 16:33:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:23.382 16:33:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:23.382 16:33:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:23.382 16:33:59 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:23.382 16:33:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:23.382 16:33:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:23.382 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:39:23.382 16:33:59 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:23.382 16:33:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:23.382 16:33:59 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:23.382 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:39:31.521 INFO: APP EXITING 
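[editor's sketch] The keyring cleanup traced above ("1 links removed" per key) reduces to resolving each test key's serial and unlinking it from the session keyring:

    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name") && keyctl unlink "$sn" @s  # one link removed per key
    done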
00:39:31.521 INFO: killing all VMs 00:39:31.521 INFO: killing vhost app 00:39:31.521 WARN: no vhost pid file found 00:39:31.521 INFO: EXIT DONE 00:39:34.070 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:34.070 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:34.070 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:34.070 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:34.331 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:34.331 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:34.592 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:34.592 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:34.592 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:38.800 Cleaning 00:39:38.800 Removing: /var/run/dpdk/spdk0/config 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:38.800 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:38.800 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:38.800 Removing: /var/run/dpdk/spdk1/config 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:38.800 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:38.800 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:38.800 Removing: /var/run/dpdk/spdk2/config 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:38.800 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:38.800 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:38.800 Removing: 
/var/run/dpdk/spdk3/config 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:38.800 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:38.800 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:38.800 Removing: /var/run/dpdk/spdk4/config 00:39:38.800 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:38.800 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:38.801 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:38.801 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:38.801 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:38.801 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:38.801 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:38.801 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:38.801 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:38.801 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:38.801 Removing: /dev/shm/bdev_svc_trace.1 00:39:38.801 Removing: /dev/shm/nvmf_trace.0 00:39:38.801 Removing: /dev/shm/spdk_tgt_trace.pid1040927 00:39:38.801 Removing: /var/run/dpdk/spdk0 00:39:38.801 Removing: /var/run/dpdk/spdk1 00:39:38.801 Removing: /var/run/dpdk/spdk2 00:39:38.801 Removing: /var/run/dpdk/spdk3 00:39:38.801 Removing: /var/run/dpdk/spdk4 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1039430 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1040927 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1041769 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1042811 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1043156 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1044219 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1044405 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1044694 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1045832 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1046483 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1046842 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1047210 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1047662 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1048008 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1048368 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1048716 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1049012 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1050639 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1054057 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1054387 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1054666 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1054972 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1055354 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1055663 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1056055 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1056082 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1056436 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1056764 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1056812 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1057144 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1057588 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1057939 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1058318 00:39:38.801 Removing: 
/var/run/dpdk/spdk_pid1062867 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1068253 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1080062 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1080879 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1086050 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1086410 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1091485 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1098542 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1102239 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1114995 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1125826 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1127999 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1129177 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1149853 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1154718 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1211705 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1218096 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1225282 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1233182 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1233184 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1234190 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1235192 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1236198 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1236871 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1236876 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1237208 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1237233 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1237333 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1238391 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1239405 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1240469 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1241078 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1241197 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1241422 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1242791 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1244094 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1254110 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1288545 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1293944 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1296051 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1298638 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1298890 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1299225 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1299575 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1300292 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1302631 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1303719 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1304245 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1306813 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1307605 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1308556 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1313430 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1320015 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1320016 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1320017 00:39:38.801 Removing: /var/run/dpdk/spdk_pid1324695 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1334944 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1339758 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1347556 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1349041 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1350754 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1352428 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1358126 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1363276 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1368305 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1377400 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1377426 00:39:39.062 Removing: 
/var/run/dpdk/spdk_pid1382628 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1382845 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1383128 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1383727 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1383787 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1389181 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1389998 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1395204 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1398533 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1405589 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1412367 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1422608 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1431284 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1431290 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1454650 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1455487 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1456308 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1457075 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1458134 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1458819 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1459499 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1460186 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1465313 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1465585 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1472918 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1473058 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1479572 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1484750 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1496220 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1496923 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1502195 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1502606 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1508131 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1514911 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1517988 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1530141 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1540752 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1542623 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1543758 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1564006 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1568704 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1571893 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1579481 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1579545 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1585553 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1587750 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1590107 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1591455 00:39:39.062 Removing: /var/run/dpdk/spdk_pid1593964 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1595196 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1605249 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1605894 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1606592 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1609930 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1610378 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1610989 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1615848 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1615896 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1617711 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1618150 00:39:39.324 Removing: /var/run/dpdk/spdk_pid1618483 00:39:39.324 Clean 00:39:39.324 16:34:15 -- common/autotest_common.sh@1453 -- # return 0 00:39:39.324 16:34:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:39.324 16:34:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:39.324 16:34:15 -- common/autotest_common.sh@10 -- # set +x 00:39:39.324 16:34:15 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:39:39.324 16:34:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:39.324 16:34:15 -- common/autotest_common.sh@10 -- # set +x 00:39:39.324 16:34:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:39.324 16:34:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:39.324 16:34:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:39.324 16:34:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:39:39.324 16:34:15 -- spdk/autotest.sh@398 -- # hostname 00:39:39.324 16:34:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:39.585 geninfo: WARNING: invalid characters removed from testname! 00:40:06.173 16:34:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:08.722 16:34:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:10.108 16:34:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:12.021 16:34:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:13.935 16:34:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:15.847 16:34:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:17.231 16:34:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:17.231 16:34:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:17.231 16:34:53 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:17.231 16:34:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:17.231 16:34:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:17.231 16:34:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:17.492 + [[ -n 953991 ]] 00:40:17.492 + sudo kill 953991 00:40:17.503 [Pipeline] } 00:40:17.518 [Pipeline] // stage 00:40:17.524 [Pipeline] } 00:40:17.538 [Pipeline] // timeout 00:40:17.543 [Pipeline] } 00:40:17.558 [Pipeline] // catchError 00:40:17.564 [Pipeline] } 00:40:17.582 [Pipeline] // wrap 00:40:17.587 [Pipeline] } 00:40:17.601 [Pipeline] // catchError 00:40:17.609 [Pipeline] stage 00:40:17.612 [Pipeline] { (Epilogue) 00:40:17.624 [Pipeline] catchError 00:40:17.626 [Pipeline] { 00:40:17.640 [Pipeline] echo 00:40:17.642 Cleanup processes 00:40:17.649 [Pipeline] sh 00:40:17.975 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:17.975 1631456 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:18.032 [Pipeline] sh 00:40:18.335 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:18.335 ++ grep -v 'sudo pgrep' 00:40:18.335 ++ awk '{print $1}' 00:40:18.335 + sudo kill -9 00:40:18.335 + true 00:40:18.348 [Pipeline] sh 00:40:18.676 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:30.922 [Pipeline] sh 00:40:31.216 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:31.216 Artifacts sizes are good 00:40:31.232 [Pipeline] archiveArtifacts 00:40:31.240 Archiving artifacts 00:40:31.370 [Pipeline] sh 00:40:31.656 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:31.671 [Pipeline] cleanWs 00:40:31.682 [WS-CLEANUP] Deleting project workspace... 00:40:31.682 [WS-CLEANUP] Deferred wipeout is used... 00:40:31.690 [WS-CLEANUP] done 00:40:31.692 [Pipeline] } 00:40:31.708 [Pipeline] // catchError 00:40:31.720 [Pipeline] sh 00:40:32.007 + logger -p user.info -t JENKINS-CI 00:40:32.017 [Pipeline] } 00:40:32.031 [Pipeline] // stage 00:40:32.036 [Pipeline] } 00:40:32.052 [Pipeline] // node 00:40:32.056 [Pipeline] End of Pipeline 00:40:32.089 Finished: SUCCESS
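[editor's sketch] The coverage post-processing in the tail of the log condenses to a capture, a merge with the pre-test baseline, and a series of path removals. Same lcov flags as traced (the '/usr/*' pass additionally used --ignore-errors unused,unused); $SPDK_DIR stands in for the workspace path:

    OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $OPTS -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info  # capture post-run counters
    lcov $OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info                # fold in the baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $OPTS -q -r cov_total.info "$pat" -o cov_total.info                     # drop external and tool code
    done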